Logistics Digital Twins – The Finale: A Network That Redesigns Itself

You don’t have a logistics problem. You have a trade-off problem.

Most networks try to solve trade-offs through local heroics: saving a customer order, protecting a cutoff, keeping a hub “green.” The catch is that every local win can create an enterprise loss, because the network pays the bill somewhere else: premium freight, split shipments, emergency inventory moves, overtime volatility, or downstream congestion.

This is the orchestration failure pattern: local optimization driving global inefficiencies. And it explains why visibility alone doesn’t change outcomes.

In un-orchestrated networks, 15–25% of operational spend becomes reactive recovery costs that wouldn’t exist at this scale if the network could make trade-offs explicitly and two steps ahead.


The prize: What orchestration prevents

The Orchestration System is the enterprise layer that continuously optimizes network decisions (allocation, promises, inventory moves, and mode/route choices) within explicit guardrails and pushes those decisions into execution.

It prevents three things executives care about:

1) Margin shocks disguised as “service saves.”
Orchestration stops premium moves and emergency measures from becoming the default recovery mechanism. It makes expediting deliberate, not habitual.

2) Organizational arbitrage replacing decision-making.
In many networks, enterprise trade-offs happen through calls, chats, and escalation threads. The loudest voice or most urgent customer wins. That’s not a decision system—it’s organizational arbitrage. Orchestration makes trade-offs explicit, repeatable, and governable.

3) A network designed on assumed averages.
Without orchestration, network design is often updated using assumed stability. Orchestration closes the loop between real variability and structural redesign—so the network gets better over time, not just busier.


A short recap

Parts 1–3 in this series showed how to generate decision-grade commitments from hubs, ports, and warehouses. Part 4 shows what becomes possible when those commitments feed enterprise decisions.

Orchestration only works if local twins produce credible commitments; otherwise you automate bad assumptions.


Control tower vs Orchestration System

A control tower answers: What is happening? Where are we off plan?
An orchestration system answers: What should we do now, across the whole network, and what trade-offs are we willing to make?

That shift matters because the real challenge isn’t finding exceptions. It’s choosing the best response for the network, not for one function, site, or KPI.


Three orchestration examples

  1. Promise vs Expedite

A key customer order is at risk because a hub is congested and the planned linehaul will miss cutoff. A control tower flags red; the typical response is premium transport because saving service is culturally rewarded.

The Orchestration System forces the right question: Is expediting value-creating—or just habit? High-criticality/high-margin orders may get premium moves. Others are re-promised early to protect the week’s flow. The win isn’t “never expedite.” It’s expedite deliberately.
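To make "expedite deliberately" concrete, here is a minimal sketch of what such a guardrail check could look like in code. The order fields, thresholds, and the expedite-cost estimate are illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class Order:
    margin: float          # expected margin on this order
    service_tier: str      # e.g. "critical" or "standard"
    expedite_cost: float   # estimated premium to still hit the promise

def expedite_decision(order: Order, margin_floor: float = 0.0) -> str:
    """Decide whether a premium move creates value or is just habit.

    Illustrative rule: only expedite when the order sits in a critical
    service tier AND the margin left after paying for the premium move
    stays above an explicit floor. Everything else is re-promised early.
    """
    value_after_expedite = order.margin - order.expedite_cost
    if order.service_tier == "critical" and value_after_expedite >= margin_floor:
        return "expedite"      # deliberate premium move
    return "re-promise"        # protect the week's flow instead

# Example: a high-margin critical order versus a standard one
print(expedite_decision(Order(margin=1200, service_tier="critical", expedite_cost=400)))  # expedite
print(expedite_decision(Order(margin=300, service_tier="standard", expedite_cost=400)))   # re-promise
```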

  2. Flow-path Decisions

Inbound arrives at a DC and put-away looks sensible: it keeps things tidy and it “uses available space and capacity.” But downstream demand is building elsewhere, and replenishment lead times mean tomorrow you’ll ship partials or split loads, triggering premium moves.

Orchestration treats this as a network decision, not a site preference. It may cross-dock a portion immediately to protect demand, put away the rest, and adjust allocation logic for 48 hours. This prevents transport from paying for warehouse decisions later.

This is where cost-to-serve stops being a spreadsheet exercise and becomes daily behavior.

  3. Mode Switching

A disruption hits and the instinct is to buy speed: air, premium road, diversions. Sometimes it’s right. Often it protects today by creating tomorrow’s congestion and cost.

The Orchestration System evaluates mode switching through a network lens: will it protect a critical customer or consume scarce capacity and trigger more premium moves tomorrow? It may switch mode for a narrow segment, reroute some flows, and re-promise early elsewhere.


What it takes: guardrails + decision rights

Orchestration is not primarily an algorithm problem. It’s a governance and decision-rights topic, supported by technology.

Three requirements separate orchestration from spreadsheets and escalations:

1) Decision-grade commitments from the operational twins.
The elements discussed in Parts 1–3 deliver the inputs: credible capacity, timing, and constraint signals that can be trusted at enterprise level.

2) Guardrails that make trade-offs governable.
Not rigid policies, but boundaries that stop you from “winning today by breaking the network,” such as margin floors, service-tier rules, capacity protection for critical nodes, and risk/compliance constraints (a small configuration sketch follows this list).

3) Clear decision rights.
Who can change appointments, promises, allocations, and modes—when constraints change? Without decision rights, orchestration collapses back into escalation threads.
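As an illustration of what governable guardrails might look like in practice, here is a minimal configuration sketch. The names, node IDs, and thresholds are assumptions chosen for the example, not a product schema; in practice these boundaries would be owned by governance, not by code.

```python
# Minimal guardrail configuration sketch (illustrative names and values only).
GUARDRAILS = {
    "margin_floor_pct": 5.0,                 # no action may push an order below this margin
    "service_tiers": {
        "critical": {"may_expedite": True,  "max_premium_per_order": 500},
        "standard": {"may_expedite": False, "max_premium_per_order": 0},
    },
    "capacity_protection": {
        "hub_utilisation_cap": 0.9,          # keep headroom at critical nodes
        "protected_hubs": ["HUB-A", "HUB-B"],  # hypothetical node IDs
    },
    "compliance": {
        "hazmat_modes_allowed": ["road", "sea"],
    },
}
```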


The final concept: the Run >> Shape flywheel

Orchestration is not only how you run the network. It’s how you continuously redesign it.

  • Run (today/this week): allocate, promise, re-balance inventory, mode-switch, reroute, using real commitments from hubs and flows.
  • Shape (this quarter/this year): redesign hub roles, buffers, footprint, and route portfolio using the variability the twins actually observed.

This is the ultimate win: run-data replaces assumed averages. Network design stops being an annual spreadsheet ritual and becomes a learning system, so the network improves structurally, not just operationally.


Where AI fits

AI won’t fix unclear decision rights or bad guardrails. It will just automate them faster. AI won’t magically solve enterprise trade-offs; you still need to define what’s worth optimizing for.

But when the foundations are right, AI matters in three concrete ways:

  • Sense earlier: better prediction of variability and knock-on effects, so decisions happen before chaos locks in.
  • Decide faster: AI-assisted optimization and agentic approaches can propose and test actions continuously, compressing the cycle from exception to action.
  • Learn over time: the system improves decision rules based on what worked in reality, turning orchestration into a learning engine, not just a faster planner.

AI is an accelerant for orchestration, not a substitute for governance.


How to start

Start with one enterprise decision and make it measurable: promise vs expedite, flow-path choices, or mode switching. Define guardrails first. Use commitments from Parts 1–3 as inputs. Run a closed loop (decide → execute → learn). Expand scope only when trust is earned.

What questions to ask

  1. What share of our operational spend is reactive recovery vs planned execution?
  2. Who has explicit authority to make enterprise trade-offs—and what guardrails constrain them?
  3. Are we measuring hubs and flows on local efficiency or network contribution?
  4. When we “save” a customer order, do we know what it cost the network?
  5. Is our network design based on what actually happens—or what we assumed would happen?

Closing

The network you have today is the result of a thousand local optimizations. The network you need tomorrow is the result of designing trade-offs explicitly—and learning from what actually happens, not what you assumed. That’s what the Orchestration System delivers: a network that becomes structurally better over time.

Logistics Digital Twins: Why Now and Why the Hub Is the Starting Point

Most leadership teams don’t suffer from a lack of logistics data. They suffer from a lack of decision-ready insights.

You may know where containers are, which trucks are late, and which distribution center is backed up. Yet the response still looks familiar: expediting, overtime, buffer inventory, manual replanning, escalation calls, and operational heroics.

This is the first article in a four-part series on logistics digital twins: how they move logistics from visibility to control, why the hub is the logical starting point, and how to scale from hub twins to enterprise orchestration.


Why this series now

Four forces are converging:

1) Cost-to-serve is under pressure in places leaders don’t always see
Detention, premium freight, missed cutoffs, overtime volatility, rework, and buffer inventory can look like “operational noise.” At scale, they shape margin and working capital far more than most planning discussions acknowledge.

2) Service expectations are rising while tolerance for buffers is shrinking
Customers expect tighter and more reliable delivery promises. Meanwhile, the classic insurance policies (extra inventory, spare capacity, and manual intervention) have become expensive.

3) Volatility has become structural
Congestion, weather events, labour constraints, and capacity swings are no longer exceptions. In many networks they are the baseline, and they ripple across modes and hubs faster than traditional weekly planning cycles can absorb.

4) Sustainability is moving from reporting to operations
The biggest emission levers in logistics are operational: waiting vs flowing, routing, mode selection, idling, rehandling, and expediting. You cannot manage carbon seriously without managing variability seriously.


The value of logistics digital twins

Service reliability. A logistics digital twin improves the credibility of your promises by continuously reconciling plan versus reality. Instead of relying on averages, it helps you anticipate bottlenecks and protect cutoffs, so customer commitments become more stable and exceptions become less frequent.

Cost-to-serve and productivity. Twins reduce the hidden costs of variability: queues, idling, rework, overtime spikes, and premium transport decisions made under pressure. Over time, they turn constrained assets (labour, docks, cranes, yards) into capacity you can actually plan against.

Resilience. A twin gives you a repeatable way to respond to disruptions. You can test scenarios, predefine playbooks, and replan faster, reducing reliance on ad-hoc escalation and individual heroics.

Sustainability. By reducing waiting, unnecessary speed-ups, and expediting, twins cut emissions where it matters most—inside day-to-day operations. Just as importantly, they make trade-offs explicit: service vs cost vs carbon, supported by data rather than intuition.


What a logistics digital twin is

A logistics digital twin is a closed-loop system that links real-time logistics events to prediction, simulation, and optimization, so decisions improve continuously across hubs, flows, and the wider network.
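One way to picture that closed loop is as a small sketch in code: events update a state model, predictions and simulations inform a decision, and the decision is pushed back into execution. The function names below are illustrative placeholders for capabilities named in this series, not an actual product API.

```python
import time

def run_twin_loop(event_source, predict, simulate, optimize, execute, cadence_s=900):
    """Minimal sketch of a closed-loop logistics digital twin (illustrative only).

    event_source: yields operational events (arrivals, completions, delays)
    predict:      estimates near-term state (ETAs, queue build-up)
    simulate:     stress-tests candidate plans against that prediction
    optimize:     picks the plan that best respects guardrails and goals
    execute:      pushes the chosen decisions back into operational systems
    """
    state = {}
    while True:
        for event in event_source():             # 1. sense: update the state model
            state[event["entity"]] = event
        forecast = predict(state)                 # 2. predict: where is variability heading?
        candidates = simulate(state, forecast)    # 3. simulate: test candidate responses
        plan = optimize(candidates)               # 4. optimize: choose within guardrails
        execute(plan)                             # 5. act: close the loop into execution
        time.sleep(cadence_s)                     # replanning cadence, e.g. every 15 minutes
```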

What it isn’t:

  • A 3D visualization
  • A dashboard-only control tower
  • A big-bang model of everything

If the twin doesn’t change decisions, it’s not a twin. It’s reporting.


Where the technology stands today

Mature and accelerating. The foundational building blocks are now broadly available: event streaming from operational systems, predictive models for ETAs and handling-time variability, simulation to stress-test plans, and optimization to sequence scarce resources. AI is also improving the speed and quality of replanning, especially in exception handling and dynamic decision support.

Still hard (and why programs stall). The toughest challenges are cross-party data access and identity matching, proving models are decision-grade, and getting decision rights and operating rhythms clear. In practice, governance of decisions matters as much as governance of data.


The three layers of logistics digital twins

  • Hub twins: ports, terminals, DCs; manage capacity, queues, sequencing, labor and equipment.
  • Flow layer: between hubs; manage ETA variability, corridor constraints, routing under disruption.
  • Orchestration twin: across the network; manage allocation, promise logic, mode switching, scenarios, and network design choices.

This series starts at the hub for a reason.


Why it’s logical to start at the hub level

When companies say “we want an end-to-end digital twin,” they usually mean well and then get stuck.

The fastest path to value is to begin at the hub level because hubs offer four advantages:

1) You can control outcomes. Hubs have clear operational levers: sequencing, scheduling, prioritization, and resource deployment. When those decisions improve, results show up quickly in throughput, dwell time, and service reliability.

2) Data is more attainable. Hub data typically sits in a smaller number of systems with clearer ownership. That is a far easier starting point than cross-company, end-to-end integration.

3) Hub wins compound across the network. A reliable hub stabilizes upstream and downstream. If arrivals are smoother and throughput is predictable, you reduce knock-on effects across transport legs.

4) Orchestration depends on commitments, not guesses. Enterprise orchestration only works if hubs provide credible capacity and timing commitments. Otherwise the network plan is built on wishful thinking.

If you remember one line from this article, make it this: If you can’t predict and control your hubs, your network twin will only automate bad assumptions.


The minimum viable twin (how to start without boiling the ocean)

A minimum viable logistics digital twin has five ingredients:

  1. A short list of critical events you can capture reliably
  2. A state model that represents capacity, queues, backlog, and resources
  3. A decision loop with a replanning cadence and exception triggers
  4. Clear decision rights: who can override what, and when
  5. Two or three KPIs leadership will sponsor and use consistently
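As a hedged illustration of ingredients 2 and 3 above, here is what a bare-bones state model and exception trigger could look like. The entities, thresholds, and messages are assumptions chosen for the example, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class HubState:
    """Ingredient 2: a minimal state model for one hub (illustrative fields)."""
    docks_total: int
    docks_in_use: int = 0
    queue_trucks: int = 0
    backlog_orders: int = 0

    def utilisation(self) -> float:
        return self.docks_in_use / self.docks_total

def exception_triggers(hub: HubState) -> list[str]:
    """Ingredient 3: simple triggers that start a replanning cycle (thresholds are assumptions)."""
    triggers = []
    if hub.utilisation() > 0.9:
        triggers.append("Dock utilisation above 90%: resequence appointments")
    if hub.queue_trucks > 15:
        triggers.append("Yard queue building: pull labour forward or re-slot arrivals")
    if hub.backlog_orders > 200:
        triggers.append("Backlog beyond one shift: escalate per decision rights")
    return triggers

# Example: a hub drifting towards congestion
state = HubState(docks_total=20, docks_in_use=19, queue_trucks=18, backlog_orders=120)
for trigger in exception_triggers(state):
    print(trigger)
```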

The most reliable way to get traction is to pick one flagship hub use case and scale from there.

In the next two articles, we’ll look at two examples: sea freight and ports (high constraints, many actors), and road transport and warehouses (high frequency, direct cost-to-serve impact). We’ll close with orchestration and network design—where “run” data replaces assumed averages.

AI in 2026: From Experimentation to Implementation

2026 will mark the transition from AI experimentation to pragmatic implementation with significant emphasis on return on investment, governance, and agentic AI systems. The hype bubble has deflated, replaced by hard-nosed business requirements and measurable outcomes. CFOs become AI gatekeepers, speculative pilots get killed, and the discussion moves to “which AI projects drive profit?” In that context, five strategic shifts matter most for boards and executive teams—and seven conditions will separate winners from the rest.


Shift 1 – From Hype to Hard Work: AI Factories in an ROI-Driven World

The first shift is financial discipline. Analysts expect enterprises will defer roughly 25% of planned AI spend into 2027 as CFOs insist on clear value, not proof-of-concept experiments. Only a small minority of organisations can currently point to material EBIT impact from AI, despite wide adoption.

The era of “let’s fund ten pilots and see what sticks” is ending. Funding flows to organisations that behave more like AI factories: they standardise how use cases are sourced, evaluated, industrialised and governed, with shared platforms rather than bespoke experiments.

What this means for leadership in 2026

  • Every AI initiative needs explicit, P&L-linked metrics (revenue, cost, margin) and a timebox for showing impact.
  • Expect your CFO to become a co-owner of the AI portfolio—approving not just spend, but the value logic.
  • The key maturity question is shifting from “Do we use AI?” to “How many AI use cases are scaled, reused and governed?”

Shift 2 – AI Teammates in Every Role: Work Gets Re-Architected

By the end of 2026, around 40% of enterprise applications are expected to embed task-specific AI agents, and a similar share of roles will involve working with those agents. These are not just chatbots; they are digital colleagues handling end-to-end workflows in sales, service, finance, HR and operations.

Research from McKinsey and BCG suggests a simple rule of thumb: successful AI transformations are roughly 10% algorithms, 20% technology and data, and 70% people and processes. High performers are three times more likely to fundamentally redesign workflows than to automate existing ones.

What this means for leadership in 2026

  • Ask less “Which copilot can we roll out?” and more “What would this process look like if we assumed agents from day one?”
  • Measure success in cycle time, error rates and processes eliminated, not just productivity per FTE.
  • Treat “working effectively with agents” as a core competency for managers and professionals.

Shift 3 – New Org Structures: CAIOs, AI CoEs and Agent Ops

As AI moves into the core of the business, organisational design is following. A small but growing share of large companies now appoint a dedicated AI leader (CAIO or equivalent), accountable for turning AI strategy into business outcomes and for managing risk.

The workforce pyramid is shifting as well. Entry-level positions are “quietly disappearing”—not through layoffs, but through non-renewal—while AI-skilled workers command wage premiums of 50%+ in some markets and rising.

This drives three structural moves:

  • AI Centres of Excellence evolve from advisory teams into delivery engines that provide reference architectures, reusable agents and enablement.
  • “Agent ops” capabilities emerge—teams tasked with monitoring, tuning and governing fleets of agents across the enterprise.
  • Career paths split between traditional functional tracks and “AI orchestrator” tracks.

What this means for leadership in 2026

  • Clarify who owns AI at ExCo level—and whether they have the mandate to say no as well as yes.
  • Ensure your AI CoE is set up to ship and scale, not just write guidelines.
  • Start redesigning roles, spans of control and career paths on the assumption that agents will take over a significant share of routine work.

Shift 4 – Governance and Risk: From Optional to Existential

By the end of 2026, AI governance will be tested in courtrooms and regulators’ offices, not only in internal committees. Analysts expect thousands of AI-related legal claims globally, with organisations facing lawsuits, fines and in some cases leadership changes due to inadequate governance.

At the same time, frameworks like the EU AI Act move to enforcement, particularly in high-risk domains such as healthcare, finance, HR and public services. In parallel, many organisations are introducing “AI free” assessments to counter concerns about over-reliance and erosion of critical thinking.

What this means for leadership in 2026

  • Treat AI as a formal risk class alongside cyber and financial risk, with explicit classifications, controls and reporting.
  • Expect to demonstrate traceability, explainability and human oversight for consequential use cases.
  • Recognise that governance failures can quickly become CEO- and board-level issues, not just CIO problems.

Shift 5 – The Data Quality Bottleneck

The fifth shift is about the constraint that matters most: data quality. Across multiple sources, “AI-ready data” emerges as the primary bottleneck. Companies that neglect it could see productivity losses of 15% or more, with widespread AI initiatives missing their ROI targets due to poor foundations.

Most companies have data. Few have AI-ready data: unified, well-governed, timely, with clear definitions and ownership.

On the infrastructure side, expect a shift from “cloud-first” to “cloud where appropriate,” with organisations seeking more control over cost, jurisdiction and resilience. On the environmental side, data-centre power consumption is becoming a visible topic in ESG discussions, forcing hard choices about which workloads truly deserve the energy and capital they consume.

What this means for leadership in 2026

  • Treat critical data domains as products with clear owners and SLAs, not as exhaust from processes and applications.
  • Make data readiness a gating criterion for funding AI use cases.
  • Infrastructure and model choices are now strategic bets, not just IT sourcing decisions.

Seven Conditions for Successful AI Implementation in 2026

Pulling these shifts together, here are seven conditions that separate winners from the rest:

FINANCIAL FOUNDATIONS

1. Financial discipline first

  • Tie every AI initiative to specific P&L metrics and realistic value assumptions.
  • Kill or re-scope projects that cannot demonstrate credible impact within 12–18 months.

2. Build an AI factory

  • Standardise how you source, prioritise and industrialise use cases.
  • Focus on a small number of high-value domains and build shared platforms and solution libraries instead of one-off solutions.

OPERATIONAL EXCELLENCE

3. Redesign workflows around agents (the 10–20–70 rule)

  • Assume that only 10% of success is the model and 20% is tech/data; the remaining 70% is people and process.
  • Measure progress in terms of processes simplified or eliminated, not just tasks automated.

4. Treat data as a product

  • Invest in “AI-ready data”: unified, well-governed, timely, with clear definitions and ownership.
  • Make data readiness a gating criterion for funding AI use cases.

5. Governance by design, not retrofit

  • Mandate governance from day one: model inventories, risk classification, human-in-the-loop for high-impact decisions.
  • Build transparency, explainability and audit trails into systems upfront.

ORGANISATIONAL CAPABILITY

6. Organise for AI: leadership, CoEs and agent operations

  • Clarify executive ownership (CAIO or equivalent), empower an AI CoE to execute, and stand up agent-ops capabilities to monitor and steer your digital workforce.

7. Commit to continuous upskilling

  • Assume roughly 44% of current skills will materially change over the next five years; treat AI literacy and orchestration skills as mandatory.
  • Invest more in upskilling existing talent than in recruiting “unicorns.”

The Bottom Line

The defining question for 2026 is no longer “Should we adopt AI?” but “How do we create measurable value from AI while managing its risks?”

The performance gap is widening fast: companies redesigning workflows are pulling three to five times ahead of those merely automating existing processes. By 2027, this gap will be extremely hard to close.

Boards and executive teams that answer this through focused implementation, genuine workflow redesign, responsible governance and continuous workforce development will set the pace for the rest of the decade. Those that continue treating AI as experimentation will find themselves competing against organisations operating at multiples of their productivity, a gap that will be very hard to recover from.


Five AI Breakthroughs From 2025 That Will Show Up in Your P&L

A year ago, if you asked an AI to handle a complex customer refund, it might draft an email for you to send.

As 2025 comes to a close, AI agents in some organisations can now check the order history, verify the policy, process the refund, update several systems, and send the confirmation. That is not just a better copilot; it is a different category of capability.

Throughout 2025, the story has shifted from “we are running pilots” to where AI is quietly creating real value inside the enterprise: agents that execute multi-step workflows, voice AI that resolves problems end-to-end, multimodal AI that works on the messy mix of enterprise information, sector-specific applications in life sciences and healthcare, industrial and manufacturing, consumer industries and professional services, and more reliable systems that leaders are prepared to trust with high-stakes work.

This newsletter focuses on what is genuinely possible by the end of 2025 that was hard or rare at the end of 2024, and where new value pools are emerging.


1. From copilots to autonomous workflows

At the end of 2024, most enterprise AI lived in copilots and Q&A over knowledge bases. You prompted; the system responded, one step at a time.

By the end of 2025, leading organisations are using AI agents that can run a full workflow: collect inputs, make decisions under constraints, act in multiple systems, and report back to humans at defined checkpoints. They combine memory (what has already been done), tool use (which systems to use), and orchestration (what to do next) in a way that was rare a year ago.
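To make “memory, tool use, and orchestration” concrete, here is a deliberately simplified sketch of an agent loop. The planner and tool names are placeholders; real agent frameworks add guardrails, retries, and human checkpoints on top of a loop like this.

```python
def run_agent(goal: str, plan_next_step, tools: dict, max_steps: int = 10):
    """Simplified agent loop: plan, act, remember, until done (illustrative only).

    plan_next_step: callable (goal, memory) -> (tool_name, arguments), or None when finished
    tools:          mapping of tool names to callables that act in other systems
    """
    memory = []                                    # memory: what has already been done
    for _ in range(max_steps):
        step = plan_next_step(goal, memory)        # orchestration: what to do next
        if step is None:
            break                                  # goal reached or handed back to a human
        tool_name, args = step
        result = tools[tool_name](**args)          # tool use: act in another system
        memory.append({"tool": tool_name, "args": args, "result": result})
    return memory                                  # report back at a defined checkpoint
```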

New value pools

  • Life sciences and healthcare: automating  start-up administration, safety case intake, and medical information requests so clinical and medical teams focus on judgement, not paperwork.
  • Industrial and manufacturing: agents handling order-to-cash or maintenance workflows end-to-end, from reading emails and work orders to updating ERP and scheduling technicians.
  • Professional services: agents that move proposals, statements of work, and deliverables through review, approval and filing, improving margin discipline and cycle time.

2. Voice AI as a frontline automation channel

At the end of 2024, voice AI mostly meant smarter voice responses: slightly better menus, obvious hand-offs to humans, and limited ability to handle edge cases.

By the end of 2025, voice agents can hold natural two-way conversations, look up context across systems in real time, and execute the simple parts of a process while the customer is still on the line. For a growing part of the call mix, “talking to AI” is now an acceptable – sometimes preferred – experience.

New value pools

  • Consumer industries: automating high-volume inbound queries such as order status, returns, bookings, and loyalty program questions, with seamless escalation for the calls that truly need an expert.
  • Life sciences and healthcare: patient scheduling, pre-visit questionnaires, follow-up reminders, and simple triage flows, integrated with clinical and scheduling systems.
  • Cross-industry internal support: IT and HR helpdesks where a voice agent resolves routine issues, captures clean tickets, and routes only non-standard requests to human staff.

3. Multimodal AI and enterprise information

Most early deployments of generative AI operated in a text-only world. The reality of large organisations, however, is multimodal: PDFs, decks, images, spreadsheets, emails, screenshots, sensor data, and more.

By the end of 2025, leading systems can read, interpret, and act across all of these. They can navigate screens, and combine text, tables, and images in a single reasoning chain. On the creation side, they can generate on-brand images and videos with consistent characters and scenes, good enough for many marketing and learning use cases.

New value pools

  • Life sciences and healthcare: preparing regulatory and clinical submission packs by extracting key data and inconsistencies across hundreds of pages of protocols, reports, and correspondence.
  • Industrial and manufacturing: combining images, sensor readings, and maintenance logs to flag quality issues or emerging equipment failures before they hit output.
  • Consumer and professional services: producing localised campaigns, product explainers, and internal training content in multiple languages and formats without linear increases in agency spend.

4. Sector-specific impact in the P&L

In 2024, many sector examples of AI looked impressive on slides but were limited in scope. By the end of 2025, AI is starting to move core economics in several industries.

In life sciences and healthcare, AI-driven protein and molecule modelling shortens early discovery cycles and improves hit rates, while diagnostic support tools help clinicians make better real-time decisions. In industrial and manufacturing businesses, AI is layered onto predictive maintenance, scheduling, and quality control to improve throughput and reduce downtime. Consumer businesses are using AI to personalise offers, content, and service journeys at scale. Professional services firms are using AI for research, drafting, and knowledge reuse.

New value pools

  • Faster innovation and time-to-market: from earlier drug discovery milestones to quicker design and testing cycles for industrial products and consumer propositions.
  • Operational excellence: higher asset uptime, fewer defects, better utilisation of people and equipment across plants, networks, and service operations.
  • Revenue and margin uplift: more profitable micro-segmentation in consumer industries, and higher matter throughput and realisation rates in professional and legal services.

5. When AI became trustworthy enough for high-stakes work

Through 2023 and much of 2024, most organisations treated generative AI as an experiment.

By the end of 2025, two developments make it more realistic to use AI in critical workflows. First, dedicated reasoning models can work step by step on complex problems in code, data, or law, and explain how they arrived at an answer. Second, governance has matured: outputs are checked against source documents, policies are encoded as guardrails, and model risk is treated like any other operational risk.

New value pools

  • Compliance and risk: automated checks of policies, procedures, and documentation, with AI flagging exceptions and assembling evidence packs for human review.
  • Legal and contract operations: first pass drafts and review of contracts, research memos, and standard documents, with lawyers focusing on negotiation and high judgement work.
  • Financial and operational oversight: anomaly detection, narrative reporting, and scenario analysis that give CFOs and COOs a clearer view of where to intervene.

What this sets up for 2026

Everything above is the backdrop for 2026 – a year that will be less about experimentation and more about pragmatic implementation under real financial and regulatory scrutiny.

In my next newsletter, I will zoom in on:

  • Five strategic shifts – including the move from hype to “AI factories” with CFOs as gatekeepers, agents embedded in everyday roles, new organisational structures (CAIOs, AI CoEs, agent ops), governance moving from optional to existential, and the data-quality bottleneck that will decide who can actually scale.
  • Seven conditions for success – the financial, operational, and organisational foundations that separate companies who turn AI into EBIT from those who stay stuck in pilots.

Rather than extend this piece with another checklist, I will leave you with one question as 2025 closes:

Are you treating today’s AI capabilities as isolated experiments – or as the building blocks of the AI factory, governance, data foundations, and workforce that your competitors will be operating in 2026?

In the next edition, we will explore what it takes to answer that question convincingly.

Is There Still a Future for ERP & CRM in an AI-Driven Enterprise?

Why your next ERP/CRM decision is really about agents, data platforms, and money flows.

Most large organisations are in a similar place:

  • An ageing ERP landscape (often several instances)
  • Fragmented or underused CRM
  • Rapidly growing investments in cloud data platforms and AI
  • A board asking, “What’s our plan for the next 5–10 years?”

For the last two decades, the core question was simple:

Which suite do we standardise on?

In an AI- and agent-driven world, the question becomes more strategic:

Will our core really be ERP and CRM suites – or will it be data platforms and agents that just happen to talk to them?

From what I see in digital and AI transformations, three futures for ERP and CRM are emerging. They’re not mutually exclusive, but where you place your bets will shape your architecture, cost base and operating model for a decade.


Three futures for ERP & CRM

Option 1 – AI-Augmented ERP & CRM

In the first future, ERP and CRM remain your system of record and primary process engine.

The change comes from infusing them with AI:

  • Copilots and assistants embedded in finance, supply chain, HR, sales and service
  • Predictive models for forecasting, anomaly detection and planning
  • Built-in automation and recommendations inside the suite

The transformation journey is familiar: upgrade or replace core suites, rationalise processes, improve data, and switch on the AI capabilities that are now part of the platform.

The advantage is continuity: the mental model of “core systems” barely changes. The risk is spending heavily to recreate yesterday’s processes on a new, AI-decorated core.


Option 2 – Thin Core with an Agentic Front End

In the second future, ERP and CRM are still critical, but they are no longer the system of work people experience every day.

You introduce an agentic and workflow layer on top:

  • End-to-end journeys like lead-to-cash or source-to-pay are modelled and executed in this layer
  • Agents and orchestrated workflows call into ERP, CRM, HR and bespoke systems as needed
  • Employees increasingly interact with unified workspaces and conversational agents, rather than individual applications

ERP and CRM become transactional backbones and data providers. The real differentiation – and day-to-day productivity – lives in the orchestration layer.

This opens up flexibility and speed, but it also adds a powerful new layer that must be governed and paid for.


Option 3 – The Agentic Enterprise (Beyond ERP & CRM as Products)

In the third future, ERP and CRM stop being “big systems you buy” and become behaviours of your architecture.

  • Core business facts (orders, inventory, contracts, customer interactions) live in event streams, ledgers and shared data platforms, not only inside monolithic applications
  • Agents and policy engines handle much of the business logic and user interaction
  • Composable services provide domain capabilities – pricing, risk, subscriptions, entitlements – which agents combine to run processes

In this world, your data and event platforms are as central to running the business as any traditional application suite. ERP and CRM don’t disappear as concepts, but they are no longer the obvious centre of gravity.

Very few organisations are here end-to-end today, but many are already making decisions that either keep this option open – or quietly close it off.


Who is shaping these futures?

Once you have the three options in mind, it’s easier to see how the main players line up.

1. The suite giants – anchoring Option 1

The large business application vendors are doubling down on AI-augmented ERP and CRM – their suites for finance, operations, HR, sales and service:

  • SAP – core finance and supply chain suite, plus customer experience applications
  • Microsoft – Dynamics 365 for finance, operations, sales and customer service
  • Salesforce – cloud platform for sales, service and marketing
  • Oracle – cloud applications for finance, operations, HR and customer experience
  • Workday – integrated platform for HR and finance
  • ServiceNow – backbone for IT, employee & customer service in many organisations

Their common play:

  • Modernise their suites
  • Embed copilots and domain agents
  • Extend their own low-code and workflow tools

Goal: keep the system of record and main process engine in their platform, and make it smarter.


2. Agentic & workflow fronts – powering Option 2

A second cluster focuses on becoming your system of work – the main place where employees and agents operate.

Suite-centric fronts:

  • Microsoft: Power Platform and Copilot as the agentic layer across Dynamics and Microsoft 365
  • Salesforce: Agentforce and Slack as the agentic front for CRM and analytics
  • SAP: Joule and SAP Build/BTP to orchestrate across S/4HANA and line-of-business apps
  • Workday: emerging agent frameworks on its unified data model
  • ServiceNow: Now Platform with AI Agents and workflows across IT, employee and customer service

Vendor-neutral fronts:

  • Pega, Appian, OutSystems, Mendix – workflow and low-code platforms used to model and run journeys that cut across multiple systems
  • UiPath, Automation Anywhere – automation and “agentic” platforms that orchestrate work across ERP, CRM and legacy
  • Celonis and other process-intelligence tools – providing the process “map” and telemetry layer that agents need

All of them are, in different ways, working to own the agentic front end over a mixed application estate.


3. Cloud & data platforms – foundations for Options 2 and 3

Cloud and data platforms are the quiet foundation for the second and third futures:

  • Hyperscalers: AWS, Microsoft Azure, Google Cloud – providing compute, managed models, agent frameworks (e.g. Amazon Q/Bedrock, Azure OpenAI/Fabric, Google Vertex)
  • Data platforms: Snowflake, Databricks, and cloud-native warehouses and lakehouses

Increasingly, these platforms hold the shared operational truth: the consolidated view of customers, products, transactions and events that both applications and agents rely on.

Many organisations are already investing heavily here. The strategic question is whether these platforms remain analytics add-ons, or become part of your core system-of-record and execution layer.


4. AI-native and event-sourced challengers – the Option 3 edge

A final group rethinks ERP-like capabilities from scratch:

  • Rillet, ContextERP and other AI-native or event-sourced ERPs
  • Vertical or regional challengers that are event-driven, API-first and agent-friendly

Today they mostly play in mid-market segments or specific industries, but architecturally they look closest to the Option 3 end-state.


What the options mean when you start from legacy

Most organisations don’t choose between these options on a clean sheet. They start from multiple ERPs, several CRMs, custom code and fragmented data.

So what does it mean to lean into each path?

Leaning into Option 1 – modernise & augment the core

You are committing to:

  • Selecting strategic ERP/CRM suites and running classic, multi-year core transformations
  • Using the move to modern platforms to simplify processes and master data, not just lift-and-shift
  • Turning on embedded AI features where they are safe and valuable

Technology leaders clear technical debt and consolidate control. Finance leaders get large but relatively predictable investments with a familiar licence profile. Business leaders gain stability and better data, but day-to-day work may feel similar – just on a newer system.

The risk: over-indexing on the core and delaying cross-silo improvements.


Leaning into Option 2 – build an agentic layer on top

You are choosing to:

  • Make one or two workflow / agent / low-code platforms your main improvement engine
  • Redesign end-to-end journeys that span multiple systems
  • Put agents and orchestrated workspaces in front of employees, and increasingly, customers

Done well, this can deliver visible progress in 12–24 months without waiting for every core system to be replaced.

But it also changes your cost and control model:

  • You may reduce some “power user” licences in ERP/CRM
  • You increase consumption spend on orchestration platforms, data platforms and AI inference

It is not automatically cheaper. It is a reallocation of spend from application licences to data, AI and orchestration – and it must be managed that way.


Steering towards Option 3 – design for an agentic, data-centric future

Very few organisations will jump straight to Option 3, but you can lean in that direction when you invest:

  • Build new capabilities (for example, subscription management, partner platforms, pricing engines) as services on top of shared data and events, not as deep customisations inside ERP
  • Let more business logic live in agents and policy layers that call into applications, rather than being fully hard-coded in those applications
  • Treat your data platform as part of the operational nervous system, not just the reporting layer

This demands stronger engineering and architecture capabilities and a board that understands it is a long-term platform strategy, not a one-off project.


No-regret moves for the next 24 months

Whatever balance you choose between the three futures, some steps are almost always sensible.

1. Stabilise and simplify the core

  • Retire the most fragile legacy systems
  • Reduce bespoke code where it doesn’t create differentiation
  • Use any ERP/CRM upgrade to simplify processes and data, not just modernise technology

2. Pick your strategic orchestration and agent platforms

  • Decide whether your main system of work will be suite-centric or vendor-neutral
  • Avoid ending up with multiple, overlapping agentic layers because different teams picked their own favourites

3. Use process intelligence as the map for agents

You should not unleash agents on processes you don’t understand.

  • Use process mining and process intelligence (for example, Celonis, Signavio and similar tools) to discover how key flows actually run and where the real bottlenecks and risks are
  • Treat this as the map and telemetry system for your agent strategy: it tells you where to start, and whether changes are helping or hurting

4. Start with bounded agent use cases and clear governance

  • Begin where agents prepare work for humans or act within tight financial and policy limits
  • Put in place shared governance for agents: which systems they can touch, what actions they can take automatically, and how you monitor them
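One way to make that shared governance tangible is a small, declarative policy per agent, reviewed like any other control. The fields, systems, and limits below are illustrative assumptions, not a standard.

```python
# Illustrative agent policy: which systems an agent may touch, what it may do
# automatically, and what must always go to a human. Names and limits are assumptions.
REFUND_AGENT_POLICY = {
    "systems_allowed": ["crm", "order_management", "email"],
    "auto_approve": {
        "action": "issue_refund",
        "max_amount_eur": 100,            # above this limit, route to a human
        "requires_policy_match": True,
    },
    "always_human": ["contract_changes", "credit_limit_changes"],
    "monitoring": {
        "log_every_action": True,
        "weekly_review_owner": "agent-ops team",
    },
}
```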

ERP, CRM and the long game of AI

ERP and CRM are not going away. But they are no longer the only, or even the obvious, centre of gravity.

Over the next decade, three design choices will matter more than any feature list:

  • Where your core operational data and system-of-record live – primarily in suites, primarily in shared data platforms, or a deliberate mix
  • Where your business logic runs – inside applications, in an agentic layer, or in composable services
  • Where your money flows – mostly into licences and implementation, or increasingly into cloud data and AI consumption

The real risk is not picking the “wrong” vendor.
It is drifting into an AI and agent future that recreates today’s complexity and cost in a new shape.

The organisations that pull ahead will be the ones whose executive teams treat this as a shared design decision, not just an IT refresh – and consciously decide how far they want to travel from Option 1, through Option 2, towards Option 3.

Why 88% of Companies Use AI but Only 6% See Real Results: What McKinsey’s Research Really Tells Us

Over the past year, McKinsey – itself busy reinventing its business model with AI – has published a constant flow of AI research: adoption surveys, sector deep-dives, workforce projections, technology roadmaps. I’ve read these at different moments in time. For this newsletter, I synthesized 25 of those reports into one overview (leveraging NotebookLM).

The picture that emerges is both clearer and more confronting than any of the individual pieces on their own.

The headline is simple: AI is now everywhere, but real value is highly concentrated. A small group of “AI high performers” is pulling away from the pack—economically, organizationally, and technologically. The gap is about to widen further as we move from today’s generative tools to tomorrow’s agentic, workflow-orchestrating systems.

This isn’t a technology story. It’s a strategy, operating model, and governance story.


AI is everywhere – value is not

McKinsey’s research shows that almost 9 in 10 organizations now use AI somewhere in the business, typically in one function or a handful of use cases. Yet only about a third are truly scaling AI beyond pilots, and just 6% can attribute 5% or more EBIT uplift to AI.

Most organizations are stuck in what I call the “pilot loop”:

  1. Launch a promising proof of concept.
  2. Prove that “AI works” in a narrow setting.
  3. Hit organizational friction – ownership, data, process, risk.
  4. Park the use case and start another pilot.

On paper, these companies look active and innovative. In reality, they are accumulating “AI debt”: a growing gap between what they could achieve and what the real leaders are already realizing in terms of growth, margin, and capability.

The research is clear: tools are no longer a differentiator. Your competitive position is defined by your ability to industrialize AI – to embed it deeply into how work is done, not just where experiments are run.


The 6% success factors: what AI high performers actually do

The small cohort of high performers behaves in systematically different ways. Four contrasts stand out:

  1. They pursue growth, not just efficiency
    Most organizations still frame AI as a cost and productivity story. High performers treat efficiency as table stakes and put equal weight on new revenue, new offerings, and new business models. AI is positioned as a growth engine, not a shared-service optimization tool.
  2. They redesign workflows, not just add tools
    This is the single biggest differentiator. High performers are almost three times more likely to fundamentally redesign workflows around AI. They are willing to change decision rights, process steps, roles, and controls so that AI is embedded at the core of how work flows end-to-end.
  3. They lead from the C-suite
    In high performers, AI is not owned by a digital lab, an innovation team, or a single function. It has visible, direct sponsorship from the CEO or a top-team member, with clear, enterprise-wide mandates. That sponsorship is about more than budget approval; it’s about breaking silos and forcing trade-offs.
  4. They invest at scale and over time
    Over a third of high performers dedicate more than 20% of their digital budgets to AI. Crucially, that spend is not limited to models and tools. It funds data foundations, workflow redesign, change management, and talent.

Taken together, these behaviours show that AI leadership is a management choice, not a technical one. The playbook is available to everyone, but only a few are willing to fully commit.


The workforce is already shifting – and we’re still early

McKinsey’s data also cuts through a lot of speculation about jobs and skills. Three signals are particularly important:

  • Workforce impact is real and rising
    In the past year, a median of 17% of respondents reported workforce reductions in at least one function due to AI. Looking ahead, that number jumps to 30% expecting reductions in the next year as AI scales further.
  • The impact is uneven by function
    The biggest expected declines are in service operations and supply chain management, where processes are structured and outcomes are measurable. In other areas, hiring and reskilling are expected to offset much of the displacement.
  • New roles and skills are emerging fast
    Organizations are already hiring for roles like AI compliance, model risk, and AI ethics, and expect reskilling efforts to ramp up significantly over the next three years.

The message for leaders is not “AI will take all the jobs,” but rather:

If you’re not deliberately designing a human–AI workforce strategy that covers role redesign, reskilling, mobility, and governance implications, it will happen to you by default.


The next wave: from copilots to co-workers

Most of the current adoption story is still about generative tools that assist individual knowledge workers: drafting content, summarizing documents, writing code.

McKinsey’s research points to the next phase: Agentic AI – systems that don’t just respond to prompts but plan, orchestrate, and execute multi-step workflows with limited human input.

Three shifts matter here:

  1. From tasks to workflows
    We move from “AI helps write one email” to “AI manages the full case resolution process”—from intake to investigation, decision, and follow-up.
  2. From copilots to virtual co-workers
    Agents will interact with systems, trigger actions, call APIs, and collaborate with other agents. Humans move further upstream (framing, oversight, escalation) and downstream (relationship, judgement, exception handling).
  3. From generic tools to deep verticalization
    The most impactful agents will be highly tailored to sector and context: claims orchestration in insurance, demand planning in manufacturing, clinical operations in pharma, and so on.

Today, around six in ten organizations are experimenting with AI agents, but fewer than one in ten is scaling them in any function. The gap between high performers and everyone else is set to widen dramatically as agents move from proof of concept to production.


So what should leaders actually do?

The gap between high performers and everyone else is widening now, not in five years. As agentic AI moves from proof of concept to production, the organizations still running pilots will find themselves competing against fundamentally different operating models—ones that are faster, more scalable, and structurally more profitable.

If you sit on an executive committee or board, you might start with these questions:

  1. Ambition – Are we using AI mainly to cut cost, or do we have a clear thesis on how it will create new revenue, offerings, and business models?
  2. Workflow rewiring – For our top 5–10 value pools, have we actually redesigned end-to-end workflows around AI, or are we just bolting tools onto legacy processes?
  3. Ownership – Who on the top team is truly accountable for AI as an enterprise-wide agenda—not just for “experiments,” but for operating model, risk, and value delivery?
  4. Workforce strategy – Do we have a concrete plan for role redesign, reskilling, and new AI governance roles over the next 3–5 years, backed by budget?
  5. Foundations and governance – Are we treating data, infrastructure, and sustainability as strategic assets, with the same rigor as financial capital and cybersecurity?

The era of casual experimentation is over. McKinsey’s research makes one thing brutally clear: the organizations that will dominate the agentic era won’t be those with the most impressive demos or the longest list of pilots, but those willing to answer “yes” to all five questions – and back those answers with real budget, real accountability, and real organizational change.

The 6% are already there. The question is whether you’ll join them—or explain to your board why you didn’t.

Where Copilot Actually Saves Time, and How to Make It Happen!

Microsoft 365 Copilot is officially live in many organisations. Licences bought, pilots run, internal comms sent. Yet most employees still open blank Word docs, scroll through endless email threads, and search SharePoint by hand. Leaders are starting to ask: What is the value we get from this investment?

This isn’t a technology problem. It’s a work problem. And we can fix it!

Independent studies and government pilots are already showing roughly 30–40% time savings on first drafts and 20–30 minutes saved per long document when people actually use Copilot properly. The gap is not in the potential. It’s in how we introduce it into everyday work.

This article demystifies where Copilot really creates value, why usage is lagging, and what leaders can do to turn licences into impact.


Why Value Isn’t Showing Up

Four issues usually kill Copilot value:

1. People don’t know what it’s for
Most employees have heard the AI story, but can’t answer a basic question: “When and how, in my day, should I use Copilot?” Without clear scenarios and simple guidance, the Copilot icon is just another button.

2. Old habits beat new tools
People know how to push through work the old way: write from scratch, forward emails, dig through folders. Some are already comfortable with ChatGPT in a browser and don’t see why they should change.

3. It’s treated as an IT rollout, not a work redesign
Turning Copilot on in Word, Outlook and Teams is easy. Redesigning how your organisation drafts documents, runs meetings and finds information is hard. Too many programmes stop after the feature is turned on.

4. Governance anxiety stalls decisions
Security, legal and compliance teams see real risk: data exposure, poor-quality outputs, regulatory questions. Without clear guardrails, the safest option is to keep Copilot locked in “pilot” mode.

The upside: these are leadership and design issues, not technical limitations. That means they can be solved.


Where Copilot Actually Delivers: Five Everyday Value Zones

The biggest, most reliable gains so far cluster around five very familiar patterns of knowledge work.

1. Kill the blank page: 0 to 60% in minutes

Impact: Fast first drafts for documents, decks and emails.

Copilot shines when you ask it to get you from nothing to a solid starting point:

  • Strategy papers, board packs, proposals, policies in Word
  • First-cut slide decks in PowerPoint from a brief or source document
  • Long or nuanced emails in Outlook

This “let Copilot write the ugly first draft” pattern consistently shows the largest time savings and strong perceived quality improvements.

2. Turn every meeting into instant documentation

Impact: Decisions, actions and risks captured without a human scribe.

In Teams meetings, Copilot can:

  • Produce a structured summary
  • Pull out decisions, risks and action items
  • Answer questions afterwards: “What did we agree about X?”

This use case is easy to explain. Nobody wants to take minutes; everyone benefits from clear follow-up. In early pilots, meeting summarisation is one of the most frequently used and highest-rated features.

3. Find the right document, not just a document

Impact: Reduce time wasted hunting for information across Outlook, Teams and SharePoint.

Knowledge workers spend a serious chunk of their week just looking for things. Microsoft 365 Chat turns Copilot into a cross-suite concierge:

  • “Summarise what we know about client Y.”
  • “Show me the latest approved deck for product X.”
  • “What did we decide last quarter on pricing for Z?”

When your content already lives in Microsoft 365, this “ask before you search” habit cuts through version chaos and gives people back time and focus.

4. Manage email overload

Impact: Faster triage, clearer responses, less mental drag.

Copilot won’t solve email, but it makes it more manageable:

  • Summarising long threads so you can decide quickly what matters
  • Drafting responses and adjusting tone
  • Cleaning up structure and language

The per-email time saving might be modest, but the reduction in cognitive load is real. Copilot helps you get through the noise and focus on the handful of messages that need your judgment.

5. Accelerate light analysis and reporting in Excel

Impact: Quicker insights and recurring reports from structured data.

In Excel, Copilot can:

  • Explain what’s going on in a dataset
  • Suggest ways to slice the data
  • Create charts and narratives
  • Speed up recurring performance or KPI reporting

This is high-value but not plug-and-play. It works best with reasonably clean data and users who understand the business context. Think of it as a force multiplier for analysts and power users, not a magic button.

In short, Copilot’s sweet spot today is writing, summarising and searching across your existing Microsoft estate, plus selected analytical scenarios for more advanced users.


What Successful Organisations Do Differently

Organisations that are getting real value from Copilot have a few things in common.

They start from work, not from the tool
They don’t launch with “we’re rolling out Copilot”. They start with “we want better strategy papers, better client proposals, better governance packs” – and then show how Copilot changes how those artefacts are produced.

They build Copilot into the flow of work
Instead of creating a separate “AI zone”, they embed Copilot where work already happens: inside Teams meetings, in their intranet, alongside existing forms and workflows. People don’t go to Copilot; Copilot meets people in the tools they use all day.

They invest in skills and champions
They replace generic AI awareness sessions with short, scenario-based training: “Here’s how we now write our monthly report with Copilot.” They build champion networks in each function – credible people who share prompts, examples and tips in context.

They create guardrails instead of red tape
Risk, security and legal are involved early. Data access is configured carefully. Simple rules are agreed: always review outputs; don’t paste in external confidential data; use human judgment on important decisions.

Where leaders design Copilot into real work, usage scales. Where they simply procure it, usage stalls.


From Licences to Value: A Practical Plan

The first move is to be selective about where Copilot should create value. Instead of “rolling it out to everyone”, ask: where does knowledge work hurt most today? For most organisations that’s strategy documents and board packs, major client proposals, heavy governance cycles, and monthly reporting. Map those pain points to the five value zones and choose a small set of anchor use cases – for example, first drafts for leadership papers, meeting summaries for key forums, and tenant-wide search for major programmes.

The second move is to design the experience around those use cases. Be concrete: who uses Copilot, in which app, at what moment, and for what output. Replace generic AI briefings with sessions where teams produce real work with Copilot in the loop: a live board paper, a deal review, a performance report. People see their own content, just created differently. At the same time, identify a few credible champions in each area who experiment, refine prompts, and share examples with their colleagues.

The third move is to make experimentation feel safe. Bring risk, security and legal into the conversation early to agree which repositories Copilot can access, where restrictions apply, and a few simple rules: outputs are always reviewed, highly sensitive external information isn’t pasted into prompts, and human judgment remains the final step on important decisions. Communicate this in plain language. Clear boundaries do more for adoption than long policy decks; when people know the rules, they’re much more willing to try new ways of working.

The final move is to measure what matters and iterate. A small set of indicators is enough: time to first draft, time to prepare key meetings, time spent searching, plus self-reported usefulness and quality. Combine those with a few concrete stories – the board pack done in half the time, the proposal turned around in a day, the project review where nobody had to take notes – and you have the basis to decide where to extend licences, where to deepen training, and where to adjust governance. Over a few cycles, Copilot stops being “an AI project” and becomes part of how work gets done.


The winners in the Copilot era won’t be those with the most licences. They’ll be those who embed Copilot into daily work – better drafts, better meetings, better decisions.

Start with three things: pick your use cases, brief your champions, and decide how you’ll measure success.

How to use AI whilst keeping your Data Private and Safe

AI can pay off quickly—copilots that accelerate knowledge work, smarter customer operations, and faster software delivery. The risk is not AI itself; it is how you handle data. Look at privacy (what you expose), security (who can access), compliance (what you can prove), and sovereignty (where processing happens) as separate lenses. The playbook is simple: classify the data you’ll touch; choose one of four deployment models; apply a few guardrails—identity, logging, and simple rules people understand; then measure value and incidents. Start “as open as safely possible” with the less sensitive cases for speed, and move to tighter control as sensitivity increases.


What “Private & Safe” actually means

Private and safe AI means using the least amount of sensitive information, tightly controlling who and what AI can access, proving that your handling meets legal and industry obligations, and ensuring processing happens in approved locations. In practice you minimise exposure, authenticate users, encrypt and log activity, and keep a clear record of decisions and data flows so auditors and customers can trust the outcome.

To make this work across the enterprise, bring the right people together around each use case. The CIO and CISO own the platform choices and controls; the CDO curates which data sources are approved; Legal sets lawful use and documentation; business owners define value and success; HR and Works Council get involved where employee data or work patterns change. Run a short, repeatable intake: describe the use case, identify the data, select the deployment model, confirm the controls, and agree how quality and incidents will be monitored.


How to classify “Sensitive Data” – a simple four-tier guide

Not all data is equal. Classifying it upfront tells you how careful you need to be and which setup to use.

Tier 1 – Low sensitivity. Think public information or generic content such as first drafts of marketing copy. Treat this as the training ground for speed: use packaged tools, keep records of usage, and avoid connecting unnecessary internal sources.

Decision check: “Could this appear on our website tomorrow?” Yes = Tier 1

Tier 2 – Internal. Everyday company knowledge—policy summaries, project notes, internal wikis. Allow AI to read from approved internal sources, but restrict access to teams who need it and retain basic logs so you can review what was asked and answered.

Decision check: “Would sharing this externally require approval?” Yes = Tier 2+

Tier 3 – Confidential. Material that would harm you or your customers if leaked—client lists, pricing models, source code. Use controlled company services that you manage, limit which repositories can be searched, keep detailed activity records, and review results for quality and leakage before scaling.

Decision check: “Would leakage breach a contract or NDA?” Yes = Tier 3+

Tier 4 – Restricted or regulated. Legally protected or mission-critical information—patient or financial records, trade secrets, M&A. Run in tightly controlled environments you operate, separate this work from general productivity tools, test thoroughly before go-live, and document decisions for auditors and boards.

Decision check: “Is this regulated or business-critical?” Yes = Tier 4
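
The four checks can also be encoded as a small intake routine so classification is applied consistently rather than from memory. The sketch below is illustrative only: the function and field names are assumptions, not part of any product, and the strictest applicable answer wins.

```python
from dataclasses import dataclass


@dataclass
class IntakeAnswers:
    """Answers to the four decision checks for a proposed AI use case."""
    could_appear_on_website_tomorrow: bool      # Tier 1 check
    external_sharing_needs_approval: bool       # Tier 2+ check
    leak_would_breach_contract_or_nda: bool     # Tier 3+ check
    regulated_or_business_critical: bool        # Tier 4 check


def classify_tier(a: IntakeAnswers) -> int:
    """Return the sensitivity tier (1-4) implied by the decision checks.

    Checks run from most to least restrictive so the strictest answer wins.
    """
    if a.regulated_or_business_critical:
        return 4
    if a.leak_would_breach_contract_or_nda:
        return 3
    if a.external_sharing_needs_approval:
        return 2
    if a.could_appear_on_website_tomorrow:
        return 1
    # Ambiguous answers default to the cautious side (internal, not public).
    return 2


# Example: project notes that would need approval before external sharing -> Tier 2
print(classify_tier(IntakeAnswers(False, True, False, False)))
```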


Common mistakes – and how to fix them

Using personal AI accounts with company data.
This bypasses your protections and creates invisible risk. Make it company accounts only, block personal tools on the network, and provide approved alternatives that people actually want to use.

Assuming “enterprise tier” means safe by default.
Labels vary and settings differ by vendor. Ask for clear terms: your questions and documents are not used to improve public systems, processing locations are under your control, and retention of queries and answers is off unless you choose otherwise.

Building clever assistants without seeing what actually flows.
Teams connect documents and systems, then no one reviews which questions, files, or outputs move through the pipeline. Turn on logging, review usage, and allow only a short list of approved data connections.

Skipping basic training and a simple policy.
People guess what’s allowed, leading to inconsistent—and risky—behaviour. Publish a one-page “how we use AI here,” include it in onboarding, and name owners who check usage and costs.


AI Deployment Models

Model 1 — Secure packaged tools (fastest path to value).
Ready-made apps with business controls—ideal for broad productivity on low-to-moderate sensitivity work such as drafting, summarising, meeting notes, and internal Q&A. Examples: Microsoft Copilot for Microsoft 365, Google Workspace Gemini, Notion AI, Salesforce Einstein Copilot, ServiceNow Now Assist. Use this when speed matters and the content is not highly sensitive; step up to other models for regulated data or deeper system connections.

Model 2 — Enterprise AI services from major providers.
You access powerful models through your company’s account; your inputs aren’t used to train public systems and you can choose where processing happens. Well-suited to building your own assistants and workflows that read approved internal data. Examples: Azure OpenAI, AWS Bedrock, Google Vertex AI, OpenAI Enterprise, Anthropic for Business. Choose this for flexibility without running the underlying software yourself; consider Model 3 if you need stronger control and detailed records.

Model 3 — Managed models running inside your cloud.
The models and search components run within your own cloud environment, giving you stronger control and visibility while the vendor still manages the runtime. A good fit for confidential or regulated work where oversight and location matter. Examples: Bedrock in your AWS account, Vertex AI in your Google Cloud project, Azure OpenAI in your subscription, Databricks Mosaic AI, Snowflake Cortex. Use this when you need enterprise-grade control with fewer operational burdens than full self-hosting.

Model 4 — Self-hosted and open-source models.
You operate the models yourself—on-premises or in your cloud. This gives maximum control and sovereignty, at the cost of more engineering, monitoring, and testing. Suits the most sensitive use cases or IP-heavy R&D. Examples: Llama, Mistral, DBRX—supported by platforms such as Databricks, Nvidia NIM, VMware Private AI, Hugging Face, and Red Hat OpenShift AI. Use this when the business case and risk profile justify the investment and you have the talent to run it safely.
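
To make Model 2 concrete, here is a minimal sketch of calling an enterprise AI service through the company account rather than a personal one, assuming Azure OpenAI and the openai Python SDK. The endpoint, API version, and deployment name are placeholders; the points above about training, regions, and retention are contract and service settings, not lines of code.

```python
# Minimal Model 2 sketch: an enterprise AI service reached through the
# company subscription. Endpoint, key, and deployment name are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # company-controlled endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # from a secrets vault, never hard-coded
    api_version="2024-06-01",                            # pin the version your platform team approves
)

response = client.chat.completions.create(
    model="gpt-4o-company-deployment",  # your deployment name, created in an approved region
    messages=[
        {"role": "system", "content": "Answer using approved internal sources only."},
        {"role": "user", "content": "Summarise our internal travel policy in five bullet points."},
    ],
)

print(response.choices[0].message.content)
# In practice, also log who asked what and when, so usage can be reviewed
# (see the building blocks in the next section).
```

The same shape applies to Amazon Bedrock or Vertex AI through their own SDKs; what matters is that traffic flows through identities, endpoints, and regions the company controls.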


Building Blocks and How to Implement (by company size)

Essential Building Blocks

A few building blocks change outcomes more than anything else. Connect AI to approved data sources through a standard “search-then-answer” approach—often called Retrieval-Augmented Generation (RAG), where the AI first looks up facts in your trusted sources and only then drafts a response.

This reduces the need to copy data into the AI system and keeps authority with your original records. Add a simple filter to remove personal or secret information before questions are sent. Control access with single sign-on and clear roles. Record questions and answers so you can review quality, fix issues, and evidence compliance. Choose processing regions deliberately and, where possible, manage your own encryption keys. Keep costs in check with team budgets and a monthly review of usage and benefits.
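
As a concrete illustration of this search-then-answer flow and its guardrails, the sketch below chains a crude redaction filter, toy keyword retrieval over approved documents, a stand-in model call, and one log line per exchange. Every name here is an assumption for illustration; a real deployment would use proper PII detection, an enterprise search or vector index, and one of the deployment models described above.

```python
import re, json, datetime

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def redact(text: str) -> str:
    """Very rough filter: mask e-mail addresses before anything leaves the boundary."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)


def retrieve(question: str, approved_docs: list[str], top_k: int = 2) -> list[str]:
    """Toy retrieval: rank approved documents by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(approved_docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return ranked[:top_k]


def ask_llm(prompt: str) -> str:
    """Stand-in for the enterprise AI service call (Model 2 or 3 above)."""
    return "DRAFT ANSWER based on: " + prompt[:80] + "..."


def answer(question: str, approved_docs: list[str], user: str) -> str:
    question = redact(question)                  # filter before anything is sent
    context = retrieve(question, approved_docs)  # search approved sources first
    prompt = ("Answer only from the sources below.\n\nSources:\n"
              + "\n---\n".join(context)
              + f"\n\nQuestion: {question}")
    reply = ask_llm(prompt)                      # ...then draft the response
    # Record the exchange so quality, cost, and compliance can be reviewed later.
    print(json.dumps({"ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                      "user": user,
                      "question": question,
                      "sources_used": len(context),
                      "answer": reply}))
    return reply


docs = ["Travel policy: economy class for flights under six hours.",
        "Expense policy: submit receipts within 30 days."]
answer("What does the travel policy say about flights? Reach me at jane@example.com",
       docs, user="jane.doe")
```

Keeping retrieval in front of the model is what keeps authority with your original records: the sources decide, the model only drafts.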

Large enterprises

Move fastest with a dual approach. Enable packaged tools for day-to-day productivity, and create a central runway based on enterprise AI services for most custom assistants. For sensitive domains, provide managed environments inside your cloud with the standard connection pattern, built-in filtering, and ready-made quality tests. Reserve full self-hosting for the few cases that genuinely need it. Success looks like rapid adoption, measurable improvements in time or quality, and no data-handling incidents.

Mid-market organisations

Get leverage by standardising on one enterprise AI service from your primary cloud, while selectively enabling packaged tools where they clearly save time. Offer a single reusable pattern for connecting to internal data, with logging and simple redaction built in. Keep governance light: a short policy, a quarterly review of model quality and costs, and a named owner for each assistant.

Small and mid-sized companies

Keep it simple. Use packaged tools for daily work and a single enterprise AI service for tasks that need internal data. Turn off retention of questions and answers where available, restrict connections to a small list of approved sources, and keep work inside the company account—no personal tools or copying content out. A one-page “how we use AI here,” plus a monthly check of usage and spend, is usually enough.


What success looks like

Within 90 days, 20–40% of knowledge workers are using AI for routine tasks. Teams report time saved or quality improved on specific workflows. You have zero data-handling incidents and can show auditors your data flows, access controls, and review process. Usage and costs are tracked monthly, and you’ve refined your approved-tools list based on what actually gets adopted.

You don’t need a bespoke platform or a 200-page policy to use AI safely. You need clear choices, a short playbook, and the discipline to apply it.

Where AI Is Creating the Most Value (Q4 2025)

There’s still a value gap—but leaders are breaking away. In the latest BCG work, top performers report around five times more revenue uplift and three times deeper cost reduction from AI than peers. The common thread: they don’t bolt AI onto old processes—they rewire the work. As BCG frames it, the 10-20-70 rule applies: roughly 10% technology, 20% data and models, and 70% process and organizational change. That’s where most of the value is released.

This article is for leaders deciding where to place AI bets in 2025. If you’re past “should we do AI?” and into “where do we make real money?”, this is your map.


Where the money is (cross-industry)

1) Service operations: cost and speed
AI handles simple, repeatable requests end-to-end and coaches human agents on the rest. The effect: shorter response times, fewer repeat contacts, and more consistent outcomes—without sacrificing customer experience.

2) Supply chain: forecast → plan → move
The gains show up in fewer stockouts, tighter inventories, and faster cycle times. Think demand forecasting, production planning, and dynamic routing that reacts to real-world conditions.

3) Software and engineering: throughput
Developer copilots and automated testing increase release velocity and reduce rework. You ship improvements more often, with fewer defects, and free scarce engineering time for higher-value problems.

4) HR and talent: faster funnels and better onboarding/learning
Screening, scheduling, and candidate communication are compressed from days to hours. Internal assistants support learning and workforce planning. The results: shorter time-to-hire and better conversion through each stage.

5) Marketing and sales: growing revenue
Personalization, next-best-action, and on-the-fly content creation consistently drive incremental sales. This is the most frequently reported area for measurable revenue lift.

Leadership advice: Pick 2-3 high-volume processes (at least one cost, one revenue). Redesign the workflow rather than just adding AI on top. Set hard metrics (cost per contact, cycle time, revenue per visit) and a 90-day checkpoint. Industrialize what works; kill what doesn’t.


Sector spotlights

Consumer industries (Retail & Consumer Packaged Goods)

Marketing and sales.

  • Personalized recommendations increase conversion and basket size; retail media programs are showing verified incremental sales.
  • AI-generated marketing content reduces production costs and speeds creative iteration across markets and channels. Mondelez reported 30-50% reduction in marketing content production costs using generative AI at scale.
  • Campaign analytics that used to take days are produced automatically, so teams run more “good bets” each quarter.

Supply chain.

  • Demand forecasting sharpens purchasing and reduces waste.
  • Production planning cuts changeovers and work-in-progress.
  • Route optimization lowers distance traveled and fuel, improving on-time delivery.

Customer service.

  • AI agents now resolve a growing share of contacts end-to-end. IKEA’s AI agents already handle 47% of all requests, so service staff can offer more support on the remaining questions.
  • Agent assist gives human colleagues instant context and suggested next steps.
    The result is more issues solved on first contact, shorter wait times, and maintained satisfaction, provided clear hand-offs to humans exist for complex cases.

What to copy: Start with one flagship process in each of the three areas above; set a 90-day target; only then roll it across brands and markets with a standard playbook.


Manufacturing (non-pharma)

Predictive maintenance.
When tied into scheduling and spare-parts planning, predictive maintenance reduces unexpected stoppages and maintenance costs, laying the foundation for higher overall equipment effectiveness (OEE).

Computer-vision quality control.
In-line visual inspection detects defects early, cutting scrap, rework, and warranty exposure. Value compounds as models learn across lines and plants.

Production scheduling.
AI continuously rebalances schedules for constraints, changeovers, and demand shifts—more throughput with fewer bottlenecks. Automotive and electronics manufacturers report 5-15% throughput gains when AI-driven scheduling handles real-time constraints.

Move to scale: Standardize data capture on the line, run one “AI plant playbook” to convergence, then replicate. Treat models as line assets with clear ownership, service levels, and a retraining cadence.


Pharmaceuticals

R&D knowledge work.
AI accelerates three high-friction areas: (1) large evidence reviews, (2) drafting protocols and clinical study reports, and (3) assembling regulatory summaries. You remove weeks from critical paths and redirect scientists to higher-value analysis.

Manufacturing and quality.
Assistants streamline batch record reviews, deviation write-ups, and quality reports. You shorten release cycles and reduce delays. Govern carefully under Good Manufacturing Practice, with humans approving final outputs.

Practical tip: Stand up an “AI for documents” capability (standardized templates, automated redaction, citation checking, audit trails) before you touch lab workflows. It pays back quickly, proves your governance model, and reduces compliance risk when you expand to higher-stakes processes.


Healthcare providers

Augment the professional; automate the routine. Radiology, pathology, and frontline clinicians benefit from AI that drafts first-pass reports, triages cases, and pre-populates documentation. Northwestern Medicine studies show approximately 15.5% average productivity gains (up to 40% in specific workflows) in radiology report completion without accuracy loss. Well-designed oversight maintains quality while reducing burnout.

Non-negotiable guardrail: Clear escalation rules for edge cases and full traceability. If a tool can’t show how it arrived at a suggestion, it shouldn’t touch a clinical decision. Establish explicit human review protocols for any AI-generated clinical content before it reaches patients or medical records.


Financial services

Banking.

  • Service and back-office work: assistants summarize documents, draft responses, and reconcile data. JPMorgan reports approximately 30% fewer servicing calls per account in targeted Consumer and Community Banking segments and 15% lower processing costs in specific workflows.
  • Risk and compliance: earlier risk flags, smarter anti-money-laundering reviews, and cleaner audit trails reduce losses and manual rework.

Insurance.

  • Claims: straight-through processing for simple claims moves from days to hours.
  • Underwriting: AI assembles files and surfaces risk signals so underwriters focus on complex judgment.
  • Back office: finance, procurement, and HR automations deliver steady, compounding savings.

Leadership note: Treat service assistants and claims bots as products with roadmaps and release notes—not projects. That discipline keeps quality high as coverage expands.


Professional services (legal, consulting, accounting)

Document-heavy work is being rebuilt: contract and filing review, research synthesis, proposal generation. Well-scoped processes often see 40–60% time savings. Major law firms report contract review cycles compressed from 8-12 hours to 2-3 hours for standard agreements, with associates redirected to judgment-heavy analysis and client advisory work.

Play to win: Build a governed retrieval layer over prior matters, proposals, and playbooks—your firm’s institutional memory—then give every practitioner an assistant that can reason over it.


Energy and utilities

Grid and renewables.
AI improves demand and renewable forecasting and helps balance the grid in real time. Autonomous inspections (drones plus computer vision) speed asset checks by 60-70% and reduce hazards. Predictive maintenance on critical infrastructure prevents outages and cuts truck rolls (field service visits); utilities report a 20-30% reduction in unplanned downtime when AI is tied into work-order systems.

How to scale: Start with one corridor or substation, prove inspection cycle time and fault detection, then expand with a standard data schema so models learn from every site.


Next Steps (practical and measurable)

1) Choose three processes—one for cost, one for revenue, one enabler.
Examples:

  • Cost: customer service automation, predictive maintenance, the month-end finance close.
  • Revenue: personalized offers, “next-best-action” in sales, improved online merchandising.
  • Enabler: developer assistants for code and tests, HR screening and scheduling.
    Write a one-line success metric and a quarterly target for each (e.g., “reduce average response time by 30%,” “increase conversion by 2 points,” “ship weekly instead of bi-weekly”).

2) Redesign the work, not just the process map.
Decide explicitly: what moves to the machine, what stays with people, where the hand-off happens, and what the quality gate is. Train for it. Incentivize it.

3) Industrialize fast.
Stand up a small platform team for identity, data access, monitoring, and policy. Establish lightweight model governance. Create a change backbone (playbooks, enablement, internal communications) so each new team ramps faster than the last.

4) Publish a value dashboard.
Measure cash, not demos: cost per contact, cycle time, on-shelf availability, release frequency, time-to-hire, revenue per visit. Baseline these metrics before launch—most teams skip this step and cannot prove impact six months later when challenged. Review monthly. Retire anything that doesn’t move the number.

5) Keep humans in the loop where it matters.
Customer experience, safety, financial risk, and regulatory exposure all require clear human decision points. Automate confidently—but design escalation paths from day one.


Final word

In 2025, AI pays where volume is high and rules are clear (service, supply chain, HR, engineering), and where personalization drives spend (marketing and sales). The winners aren’t “using AI.” They are rewiring how the work happens, and they can prove it on the P&L.

From AI-Enabled to AI-Centered – Reimagining How Enterprises Operate

Enterprises around the world are racing to deploy generative AI. Yet most remain stuck in the pilot trap: experimenting with copilots and narrow use cases while legacy operating models, data silos, and governance structures stay intact. The results are incremental: efficiency gains without strategic reinvention.

With rapidly developing context-aware AI, we can also chart a different course: making AI not an add-on, but the center of how the enterprise thinks, decides, and operates. This shift, captured powerfully in The AI-Centered Enterprise (ACE) by Ram Bala, Natarajan Balasubramanian, and Amit Joshi (IMD), signals the next evolution in business design: from AI-enabled to AI-centered.

The premise is bold. Instead of humans using AI tools to perform discrete tasks, the enterprise itself becomes an intelligent system, continuously sensing context, understanding intent, and orchestrating action through networks of people and AI agents. This is the next-generation operating model for the age of context-aware intelligence, and it will separate tomorrow’s leaders from those merely experimenting today.


What an AI-Centered Enterprise Is

At its core, an AI-centered enterprise is built around Context-Aware AI (CAI), systems that understand not only content (what is being said) but also intent (why it is being said). These systems operate across three layers:

  • Interaction layer: where humans and AI collaborate through natural conversation, document exchange, or digital workflows (ACE).
  • Execution layer: where tasks and processes are performed by autonomous or semi-autonomous agents.
  • Governance layer: where policies, accountability, and ethical guardrails are embedded into the AI fabric.

The book introduces the idea of the “unshackled enterprise” — one no longer bound by rigid hierarchies and manual coordination. Instead, work flows dynamically through AI-mediated interactions that connect needs with capabilities across the organization. The result is a company that can learn, decide, and act at digital speed — not by scaling headcount, but by scaling intelligence.

This is a profound departure from current “AI-enabled” organizations, which mostly deploy AI as assistants within traditional structures. In an AI-centered enterprise, AI becomes the organizing principle, the invisible infrastructure that drives how value is created, decisions are made, and work is executed.


How It Differs from Today’s Experiments

Today’s enterprise AI landscape is dominated by point pilots and embedded copilots: productivity boosters bolted onto existing processes. They enhance efficiency but rarely transform the logic of value creation.

An AI-centered enterprise, by contrast, rebuilds the transaction system of the organization around intelligence. Key differences include:

  • From tools to infrastructure: AI doesn’t automate isolated tasks; it coordinates entire workflows, from matching expertise to demand, to ensuring compliance, to optimizing outcomes.
  • From structured data to unstructured cognition: Traditional analytics rely on structured databases. AI-centered systems start with unstructured information (emails, documents, chats), extracting relationships and meaning through knowledge graphs and retrieval-augmented reasoning.
  • From pilots to internal marketplaces: Instead of predefined processes, AI mediates dynamic marketplaces where supply and demand for skills, resources, and data meet in real time, guided by the enterprise’s goals and policies.

The result is a shift from human-managed bureaucracy to AI-coordinated agility. Decision speed increases, friction falls, and collaboration scales naturally across boundaries.


What It Takes: The Capability and Governance Stack

The authors of The AI-Centered Enterprise propose a pragmatic framework for this transformation, the 3Cs: Calibrate, Clarify, and Channelize.

  1. Calibrate – Understand the types of AI your business requires. What decisions depend on structured vs. unstructured data? What precision or control is needed? This step ensures technology choices fit business context.
  2. Clarify – Map your value creation network: where do decisions happen, and how could context-aware intelligence change them? This phase surfaces where AI can augment, automate, or orchestrate work for tangible impact.
  3. Channelize – Move from experimentation to scaled execution. Build a repeatable path for deployment, governance, and continuous improvement. Focus on high-readiness, high-impact areas first to build credibility and momentum.

Underneath the 3Cs lies a capability stack that blends data engineering, knowledge representation, model orchestration, and responsible governance.

  • Context capture: unify data, documents, and interactions into a living knowledge graph.
  • Agentic orchestration: deploy systems of task, dialogue, and decision agents that coordinate across domains.
  • Policy and observability: embed transparency, traceability, and human oversight into every layer.

Organizationally, the AI-centered journey requires anchored agility — a balance between central guardrails (architecture, ethics, security) and federated innovation (business-owned use cases). As with digital transformations before it, success depends as much on leadership and learning as on technology.


Comparative Perspectives — and Where the Field Is Heading

The ideas in The AI-Centered Enterprise align with a broader shift seen across leading research and consulting work, a convergence toward AI as the enterprise operating system.

McKinsey: The Rise of the Agentic Organization

McKinsey describes the next evolution as the agentic enterprise: organizations where humans work alongside fleets of intelligent agents embedded throughout workflows. Early adopters are already redesigning decision rights, funding models, and incentives to harness this new form of distributed intelligence.
Their State of AI 2025 shows that firms capturing the most value have moved beyond pilots to process rewiring and AI governance, embedding AI directly into operations, not as a service layer.

BCG: From Pilots to “Future-Built” Firms

BCG’s September 2025 research finds that only about 5% of companies currently realize sustainable AI value at scale. Those that do are “future-built”, treating AI as a capability, not a project. These leaders productize internal platforms, reuse components across business lines, and dedicate investment to AI agents, which BCG estimates already generate 17% of enterprise AI value, projected to reach nearly 30% by 2028.
This mirrors the book’s view of context-aware intelligence and marketplaces as the next sources of competitive advantage.

Harvard Business Review: Strategy and Human-AI Collaboration

HBR provides the strategic frame. In Competing in the Age of AI, Iansiti and Lakhani show how AI removes the traditional constraints of scale, scope, and learning, allowing organizations to grow exponentially without structural drag. Wilson and Daugherty’s Collaborative Intelligence adds the human dimension, redefining roles so that humans shift from operators to orchestrators of intelligent systems.

Convergence – A New Operating System for the Enterprise

Across these perspectives, the trajectory is clear:

  • AI is moving from standalone tools to coordination-system capabilities.
  • Work will increasingly flow through context-aware agents that understand intent and execute autonomously.
  • Leadership attention is shifting from proof-of-concept to operating-model redesign: governance, role architecture, and capability building.
  • The competitive gap will widen between firms that use AI to automate tasks and those that rebuild the logic of their enterprise around intelligence.

In short, the AI-centered enterprise is not a future vision — it is the direction of travel for every organization serious about reinvention in the next five years.


The AI-Centered Enterprise – A Refined Summary

The AI-Centered Enterprise (Bala, Balasubramanian & Joshi, 2025) offers one of the clearest playbooks yet for this new organisational architecture. The authors begin by defining the limitations of today’s AI adoption — fragmented pilots, a narrow reliance on structured data, and an overreliance on human intermediaries to bridge data, systems, and decisions.

They introduce Context-Aware AI (CAI) as the breakthrough: AI that understands not just information but the intent and context behind it, enabling meaning to flow seamlessly across functions. CAI underpins an “unshackled enterprise,” where collaboration, decision-making, and execution happen fluidly across digital boundaries.

The book outlines three core principles:

  1. Perceive context: Use knowledge graphs and natural language understanding to derive meaning from unstructured information — the true foundation of enterprise knowledge.
  2. Act with intent: Deploy AI agents that can interpret business objectives, not just execute instructions.
  3. Continuously calibrate: Maintain a human-in-the-loop approach to governance, ensuring AI decisions stay aligned with strategy and ethics.

Implementation follows the 3C framework — Calibrate, Clarify, Channelize — enabling leaders to progress from experimentation to embedded capability.

The authors conclude that the real frontier of AI is not smarter tools but smarter enterprises: organizations designed to sense, reason, and act as coherent systems of intelligence.


Closing Reflection

For executives navigating transformation, The AI-Centered Enterprise reframes the challenge. The question is no longer how to deploy AI efficiently, but how to redesign the enterprise so intelligence becomes its organizing logic.

Those who start now, building context-aware foundations, adopting agentic operating models, and redefining how humans and machines collaborate, will not just harness AI. They will become AI-centered enterprises: adaptive, scalable, and truly intelligent by design.