Logistics Digital Twins: How Road + Warehouse Twins End the Rush-Shipment Trap and Protect Margin

Rush shipments are not a logistics problem. They are a warehouse planning problem that logistics pays for. The pattern is predictable: the warehouse plan breaks, and the organization compensates with speed (premium carriers, split shipments, overtime, last-minute routing). People become heroes for covering mistakes. Over time, rush shipments become the default recovery mechanism: structural waste disguised as operational excellence.

That’s why the road/warehouse logistics digital twin matters. Not only because it finds a better route, but because it prevents urgency from becoming structural. It synchronizes transport, appointments, dock capacity, labor availability, and execution priorities around the same operational truth, so you plan for flow first, and only then use speed when it truly pays back.

(Note: this is Part 3 in my Digital Twin series, covering the micro shocks that hit every hour on the loading dock and drain margin. Part 2 dealt with macro shocks and ship–port synchronization.)


The prize: cost-to-serve discipline, fewer margin shocks

1) Fewer premium moves and tighter cost-to-serve control.
On land, variability becomes cost leakage fast. The twin reduces premium transport and “recovery spend” by preventing the avoidable failures upstream: dock gridlock, wave collapse, and labor mismatch. In many networks, a meaningful share of premium freight is reactive recovery: moves that wouldn’t have been needed if the planned flow had held together.

2) Reliable promises without over-serving everyone.
The twin makes service levels real. Instead of trying to rescue every order with the same urgency, you protect the critical shipments and re-promise early for the rest, improving trust while reducing expensive heroics.

3) Labor volatility becomes manageable, not chaotic.
In many networks, labor availability and skills mix are the constraint. A twin treats labor as a clear planning input, so the day’s plan is realistic before execution begins.


Four issues where a digital twin can help

1. The rush-shipment spiral

A small delay inside the warehouse cascades into premium spend outside it. The chain reaction is predictable: inbound arrives late >> waves slip >> outbound cutoffs are missed >> operations split loads, upgrade carriers, or dispatch partials >> costs spike and service still becomes fragile.

A twin breaks this spiral by making trade-offs explicit early. It identifies which orders to protect, which to re-promise, where consolidation still works, and when a premium move is justified.

2. Waiting for dock availability

Trucks wait because docks are full, paperwork isn’t ready, labor is short, or the yard can’t sequence efficiently. These costs are fragmented—carriers charge, sites absorb, customers complain—so they often remain invisible at enterprise level.

A twin reduces detention by synchronizing three truths: what is arriving, what capacity is actually available, and what should be prioritized. It rebalances appointments as reality changes so arrivals match real readiness.
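To make the rebalancing concrete, here is a minimal sketch of the idea, not a reference implementation. Slot sizes, priorities, and field names are all assumptions: trucks are re-sequenced by updated ETA and priority against the docks actually free in each time window.

```python
from dataclasses import dataclass

@dataclass
class Arrival:
    truck_id: str
    eta_slot: int      # updated ETA, expressed as a time-slot index
    priority: int      # lower number = more critical (e.g. cutoff risk)

def rebalance(arrivals, docks_per_slot, horizon):
    """Greedy re-assignment: the most critical trucks get the earliest
    feasible slot at or after their updated ETA."""
    capacity = {slot: docks_per_slot for slot in range(horizon)}
    plan = {}
    # Place critical trucks first, then by ETA.
    for a in sorted(arrivals, key=lambda a: (a.priority, a.eta_slot)):
        slot = a.eta_slot
        while slot < horizon and capacity[slot] == 0:
            slot += 1                    # push to the next slot with a free dock
        if slot < horizon:
            capacity[slot] -= 1
            plan[a.truck_id] = slot
        else:
            plan[a.truck_id] = None      # no feasible slot: escalate / re-promise
    return plan

arrivals = [
    Arrival("T1", eta_slot=0, priority=2),
    Arrival("T2", eta_slot=0, priority=1),  # critical inbound
    Arrival("T3", eta_slot=0, priority=3),
]
plan = rebalance(arrivals, docks_per_slot=1, horizon=4)
# The critical truck keeps the first slot; the others are pushed to later
# slots instead of queuing in the yard.
```

A real twin would of course replan continuously as ETAs shift, but the core move is the same: arrivals matched to real readiness, in priority order.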

3. The labor mismatch cascade

Many sites have capacity until they don’t, because labor coverage and skills mix fluctuate. A 10–15% shortfall in scarce roles can destroy throughput far more than the same shortfall elsewhere. The late discovery leads to overtime, shortcuts, quality issues, and rework, and often triggers premium transport to protect cutoffs.

A twin treats labor fill rate and skills coverage as first-class constraints. It reshapes waves, priorities, and dock sequencing early, instead of discovering during the day that the plan was never feasible. The result is less overtime volatility and fewer last-minute rescues.

4. The inventory/flow-path trap

This is where cost-to-serve stops being a spreadsheet exercise. You consolidate inventory at a regional DC to reduce handling costs. It works until a demand spike forces cross-country expediting because the stock is now 1,200 miles away. Or inbound gets sent to put-away instead of cross-dock because “we have space,” but demand materializes before replenishment runs, triggering split shipments and premium moves.

These are flow-path decisions that create transport liabilities. A twin makes the trade-off explicit in real time: hold vs move, cross-dock vs put-away, split vs consolidate—based on actual margin impact, not yesterday’s flow logic.


Example: Monday morning peak

A promotion week starts with volume above plan and labor fill rate 15% short. Without a twin, appointments stay static while ETAs shift, congestion builds in the yard and at the docks, and the wave plan runs “as designed” until it collapses under backlog. Outbound cutoffs turn red, operations split loads and activate premium carriers, overtime spikes, and service still becomes fragile. The cost spike is then rationalized as “the cost of peaks.”

With a twin, the day starts differently. The labor shortfall is treated as the binding constraint at the start of shift, appointments are rebalanced to smooth peaks and protect critical inbound and outbound flows, and dock sequencing is reshaped around true cutoff risk rather than yesterday’s plan. Waves and labor priorities are adjusted early and some orders are re-promised explicitly, so premium moves are targeted and justified and overtime becomes deliberate rather than chaotic. The outcome isn’t “no disruption.” It’s fewer premium moves, less overtime volatility, and a controlled service impact instead of a margin surprise.

So what does it take to make this real?


What it takes

Three things separate this from spreadsheet planning:

(1) Decision-grade data/insights on labor coverage, dock state, and appointment flow, not just transport ETAs.

(2) Decision logic that is fast enough to replan before chaos locks in.

(3) Clear authority on who can adjust appointments, waves, and promises when constraints change.


KPIs: 3 north stars (and a small supporting set)

1) Premium freight rate — % of shipments and % of spend that is premium/expedited.
2) Cost-to-serve variance by segment — which customers/products/orders are unprofitable once recovery effort is included.
3) Labor productivity under volatility — throughput per labor hour during peaks, plus overtime volatility.

Supporting diagnostics: detention/dwell, missed cutoffs, and plan adherence under stress (how often you stayed in controlled flow vs reverted to heroics).
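As an illustration of the first north star, the premium freight rate can be computed directly from shipment records, split into its shipment-count and spend shares. The field names here are assumptions for the sketch:

```python
def premium_freight_rate(shipments):
    """Premium freight rate: % of shipments and % of spend
    that is premium/expedited."""
    total_count = len(shipments)
    total_spend = sum(s["cost"] for s in shipments)
    premium = [s for s in shipments if s["premium"]]
    return {
        "pct_shipments": 100 * len(premium) / total_count,
        "pct_spend": 100 * sum(s["cost"] for s in premium) / total_spend,
    }

shipments = [
    {"cost": 400, "premium": False},
    {"cost": 500, "premium": False},
    {"cost": 1100, "premium": True},   # expedited recovery move
]
rates = premium_freight_rate(shipments)
# Premium moves are 1 of 3 shipments but over half of total spend:
# exactly the asymmetry that makes this a north-star KPI.
```

Tracking both percentages matters: a small share of shipments can hide a dominant share of spend.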


How to implement without boiling the ocean

Start at one site where premium spend, detention, and service shortfalls are already visible and measurable; this creates a clear baseline and fast credibility. Then make the key operational signals decision-grade: labor coverage and skills mix, appointment flow, dock state, backlog, and cutoff risk. Next, define simple rules that make trade-offs explicit, especially when to re-promise versus when to expedite, tied to service tier and margin. From there, close the loop into the daily operating cadence by connecting those rules to wave replanning, dock sequencing, and appointment adjustments as reality changes. Finally, export the commitments you can now trust into the enterprise layer (which I will address in Part 4 of my series), so network orchestration is built on real constraints rather than assumed averages.
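The re-promise-versus-expedite rule can start very simply. Here is a hypothetical sketch; the service tiers, margin threshold, and field names are all assumptions, not a reference policy: expedite only when the order is in a protected tier and the premium cost still leaves acceptable margin, otherwise re-promise early.

```python
def recovery_action(order, premium_cost, min_margin_pct=5.0):
    """Decide between expediting and early re-promising for an
    at-risk order. Illustrative rule tied to service tier and margin."""
    if order["service_tier"] != "critical":
        return "re-promise"               # protect only what truly matters
    margin_after = order["margin"] - premium_cost
    margin_pct = 100 * margin_after / order["revenue"]
    if margin_pct >= min_margin_pct:
        return "expedite"                 # the premium move is justified
    return "re-promise"                   # a rescue would destroy margin

order = {"service_tier": "critical", "revenue": 10_000, "margin": 1_500}
print(recovery_action(order, premium_cost=600))    # expedite: margin holds
print(recovery_action(order, premium_cost=1_200))  # re-promise: margin gone
```

Even a rule this crude beats the status quo in many sites, where the implicit policy is "expedite everything and hope".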


The questions executives should ask

  1. What percentage of our premium freight spend is planned vs reactive?
  2. Which shipments are profitable on paper but unprofitable after recovery cost?
  3. Do we re-promise early by policy or do we “save it” with premium transport by habit?
  4. Are labor planning and operational planning aligned or still separate?
  5. Do our incentives reward hitting service at any cost, or hitting service and margin?

Logistics Digital Twins: Why Now and Why the Hub Is the Starting Point

Most leadership teams don’t suffer from a lack of logistics data. They suffer from a lack of decision-ready insights.

You may know where containers are, which trucks are late, and which distribution center is backed up. Yet the response still looks familiar: expediting, overtime, buffer inventory, manual replanning, escalation calls, and operational heroics.

This is the first article in a four-part series on logistics digital twins: how they move logistics from visibility to control, why the hub is the logical starting point, and how to scale from hub twins to enterprise orchestration.


Why this series now

Four forces are converging:

1) Cost-to-serve is under pressure in places leaders don’t always see
Detention, premium freight, missed cutoffs, overtime volatility, rework, and buffer inventory can look like “operational noise.” At scale, they shape margin and working capital far more than most planning discussions acknowledge.

2) Service expectations are rising while tolerance for buffers is shrinking
Customers expect tighter and more reliable delivery promises. Meanwhile, the classic insurance policies (extra inventory, spare capacity, and manual intervention) have become expensive.

3) Volatility has become structural
Congestion, weather events, labor constraints, and capacity swings are no longer exceptions. In many networks they are the baseline, and they ripple across modes and hubs faster than traditional weekly planning cycles can absorb.

4) Sustainability is moving from reporting to operations
The biggest emission levers in logistics are operational: waiting vs flowing, routing, mode selection, idling, rehandling, and expediting. You cannot manage carbon seriously without managing variability seriously.


The value of logistics digital twins

Service reliability. A logistics digital twin improves the credibility of your promises by continuously reconciling plan versus reality. Instead of relying on averages, it helps you anticipate bottlenecks and protect cutoffs, so customer commitments become more stable and exceptions become less frequent.

Cost-to-serve and productivity. Twins reduce the hidden costs of variability: queues, idling, rework, overtime spikes, and premium transport decisions made under pressure. Over time, they turn constrained assets (labor, docks, cranes, yards) into capacity you can actually plan against.

Resilience. A twin gives you a repeatable way to respond to disruptions. You can test scenarios, predefine playbooks, and replan faster, reducing reliance on ad-hoc escalation and individual heroics.

Sustainability. By reducing waiting, unnecessary speed-ups, and expediting, twins cut emissions where it matters most—inside day-to-day operations. Just as importantly, they make trade-offs explicit: service vs cost vs carbon, supported by data rather than intuition.


What a logistics digital twin is

A logistics digital twin is a closed-loop system that links real-time logistics events to prediction, simulation, and optimization, so decisions improve continuously across hubs, flows, and the wider network.

What it isn’t:

  • A 3D visualization
  • A dashboard-only control tower
  • A big-bang model of everything

If the twin doesn’t change decisions, it’s not a twin. It’s reporting.


Where the technology stands today

Mature and accelerating. The foundational building blocks are now broadly available: event streaming from operational systems, predictive models for ETAs and handling-time variability, simulation to stress-test plans, and optimization to sequence scarce resources. AI is also improving the speed and quality of replanning, especially in exception handling and dynamic decision support.

Still hard (and why programs stall). The toughest challenges are cross-party data access and identity matching, proving models are decision-grade, and getting decision rights and operating rhythms clear. In practice, governance of decisions matters as much as governance of data.


The three layers of logistics digital twins

  • Hub twins: ports, terminals, DCs; manage capacity, queues, sequencing, labor and equipment.
  • Flow layer: between hubs; manage ETA variability, corridor constraints, routing under disruption.
  • Orchestration twin: across the network; manage allocation, promise logic, mode switching, scenarios, and network design choices.

This series starts at the hub for a reason.


Why it’s logical to start at the hub level

When companies say “we want an end-to-end digital twin,” they usually mean well and then get stuck.

The fastest path to value is to begin at the hub level because hubs offer four advantages:

1) You can control outcomes. Hubs have clear operational levers: sequencing, scheduling, prioritization, and resource deployment. When those decisions improve, results show up quickly in throughput, dwell time, and service reliability.

2) Data is more attainable. Hub data typically sits in a smaller number of systems with clearer ownership. That is a far easier starting point than cross-company, end-to-end integration.

3) Hub wins compound across the network. A reliable hub stabilizes upstream and downstream. If arrivals are smoother and throughput is predictable, you reduce knock-on effects across transport legs.

4) Orchestration depends on commitments, not guesses. Enterprise orchestration only works if hubs provide credible capacity and timing commitments. Otherwise the network plan is built on wishful thinking.

If you remember one line from this article, make it this: If you can’t predict and control your hubs, your network twin will only automate bad assumptions.


The minimum viable twin (how to start without boiling the ocean)

A minimum viable logistics digital twin has five ingredients:

  1. A short list of critical events you can capture reliably
  2. A state model that represents capacity, queues, backlog, and resources
  3. A decision loop with a replanning cadence and exception triggers
  4. Clear decision rights: who can override what, and when
  5. Two or three KPIs leadership will sponsor and use consistently

The most reliable way to get traction is to pick one flagship hub use case and scale from there.
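Sketched in code, a minimum viable twin is just these five ingredients wired into a loop (the event names, state fields, and trigger thresholds below are illustrative assumptions, not a product design):

```python
# Ingredients 1–3 of the minimum viable twin: critical events,
# a state model, and a decision loop with exception triggers.
class HubTwin:
    def __init__(self, dock_capacity):
        # Ingredient 2: a state model of capacity, queues, and backlog.
        self.state = {"queue": 0, "backlog": 0, "docks_free": dock_capacity}

    def apply_event(self, event):
        """Ingredient 1: a short list of reliably captured events."""
        if event == "truck_arrived":
            self.state["queue"] += 1
        elif event == "truck_docked":
            self.state["queue"] -= 1
            self.state["docks_free"] -= 1
        elif event == "dock_opened":
            self.state["docks_free"] += 1
        elif event == "order_released":
            self.state["backlog"] += 1
        elif event == "order_shipped":
            self.state["backlog"] -= 1

    def needs_replan(self, max_queue=3):
        """Ingredient 3: an exception trigger. Ingredient 4 (decision
        rights) determines WHO acts on it; that lives outside the code."""
        return self.state["queue"] > max_queue and self.state["docks_free"] == 0

twin = HubTwin(dock_capacity=2)
for e in ["truck_arrived"] * 5 + ["truck_docked", "truck_docked"]:
    twin.apply_event(e)
# Queue is now 3 with no free docks: still below the trigger, so no replan yet.
```

Ingredient 5 (leadership-sponsored KPIs) is then computed from the same state history, so the twin and the scorecard never disagree.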

In the next two articles, we’ll look at two examples: sea freight and ports (high constraints, many actors), and road transport and warehouses (high frequency, direct cost-to-serve impact). We’ll close with orchestration and network design—where “run” data replaces assumed averages.

AI in 2026: From Experimentation to Implementation

2026 will mark the transition from AI experimentation to pragmatic implementation, with significant emphasis on return on investment, governance, and agentic AI systems. The hype bubble has deflated, replaced by hard-nosed business requirements and measurable outcomes. CFOs become AI gatekeepers, speculative pilots get killed, and the discussion moves to “which AI projects drive profit?” In that context, five strategic shifts matter most for boards and executive teams, and seven conditions will separate winners from the rest.


Shift 1 – From Hype to Hard Work: AI Factories in an ROI-Driven World

The first shift is financial discipline. Analysts expect enterprises will defer roughly 25% of planned AI spend into 2027 as CFOs insist on clear value, not proof-of-concept experiments. Only a small minority of organisations can currently point to material EBIT impact from AI, despite wide adoption.

The era of “let’s fund ten pilots and see what sticks” is ending. Funding flows to organisations that behave more like AI factories: they standardise how use cases are sourced, evaluated, industrialised and governed, with shared platforms rather than bespoke experiments.

What this means for leadership in 2026

  • Every AI initiative needs explicit, P&L-linked metrics (revenue, cost, margin) and a timebox for showing impact.
  • Expect your CFO to become a co-owner of the AI portfolio—approving not just spend, but the value logic.
  • The key maturity question is shifting from “Do we use AI?” to “How many AI use cases are scaled, reused and governed?”

Shift 2 – AI Teammates in Every Role: Work Gets Re-Architected

By the end of 2026, around 40% of enterprise applications are expected to embed task-specific AI agents, and a similar share of roles will involve working with those agents. These are not just chatbots; they are digital colleagues handling end-to-end workflows in sales, service, finance, HR and operations.

Research from McKinsey and BCG suggests a simple rule of thumb: successful AI transformations are roughly 10% algorithms, 20% technology and data, and 70% people and processes. High performers are three times more likely to fundamentally redesign workflows than to automate existing ones.

What this means for leadership in 2026

  • Ask less “Which copilot can we roll out?” and more “What would this process look like if we assumed agents from day one?”
  • Measure success in cycle time, error rates and processes eliminated, not just productivity per FTE.
  • Treat “working effectively with agents” as a core competency for managers and professionals.

Shift 3 – New Org Structures: CAIOs, AI CoEs and Agent Ops

As AI moves into the core of the business, organisational design is following. A small but growing share of large companies now appoint a dedicated AI leader (CAIO or equivalent), accountable for turning AI strategy into business outcomes and for managing risk.

The workforce pyramid is shifting as well. Entry-level positions are “quietly disappearing”—not through layoffs, but through non-renewal—while AI-skilled workers command wage premiums of 50%+ in some markets and rising.

This drives three structural moves:

  • AI Centres of Excellence evolve from advisory teams into delivery engines that provide reference architectures, reusable agents and enablement.
  • “Agent ops” capabilities emerge—teams tasked with monitoring, tuning and governing fleets of agents across the enterprise.
  • Career paths split between traditional functional tracks and “AI orchestrator” tracks.

What this means for leadership in 2026

  • Clarify who owns AI at ExCo level—and whether they have the mandate to say no as well as yes.
  • Ensure your AI CoE is set up to ship and scale, not just write guidelines.
  • Start redesigning roles, spans of control and career paths on the assumption that agents will take over a significant share of routine work.

Shift 4 – Governance and Risk: From Optional to Existential

By the end of 2026, AI governance will be tested in courtrooms and regulators’ offices, not only in internal committees. Analysts expect thousands of AI-related legal claims globally, with organisations facing lawsuits, fines and in some cases leadership changes due to inadequate governance.

At the same time, frameworks like the EU AI Act move to enforcement, particularly in high-risk domains such as healthcare, finance, HR and public services. In parallel, many organisations are introducing “AI free” assessments to counter concerns about over-reliance and erosion of critical thinking.

What this means for leadership in 2026

  • Treat AI as a formal risk class alongside cyber and financial risk, with explicit classifications, controls and reporting.
  • Expect to demonstrate traceability, explainability and human oversight for consequential use cases.
  • Recognise that governance failures can quickly become CEO- and board-level issues, not just CIO problems.

Shift 5 – The Data Quality Bottleneck

The fifth shift is about the constraint that matters most: data quality. Across multiple sources, “AI-ready data” emerges as the primary bottleneck. Companies that neglect it could see productivity losses of 15% or more, with widespread AI initiatives missing their ROI targets due to poor foundations.

Most companies have data. Few have AI-ready data: unified, well-governed, timely, with clear definitions and ownership.

On the infrastructure side, expect a shift from “cloud-first” to “cloud where appropriate,” with organisations seeking more control over cost, jurisdiction and resilience. On the environmental side, data-centre power consumption is becoming a visible topic in ESG discussions, forcing hard choices about which workloads truly deserve the energy and capital they consume.

What this means for leadership in 2026

  • Treat critical data domains as products with clear owners and SLAs, not as exhaust from processes and applications.
  • Make data readiness a gating criterion for funding AI use cases.
  • Infrastructure and model choices are now strategic bets, not just IT sourcing decisions.

Seven Conditions for Successful AI Implementation in 2026

Pulling these shifts together, here are seven conditions that separate winners from the rest:

FINANCIAL FOUNDATIONS

1. Financial discipline first

  • Tie every AI initiative to specific P&L metrics and realistic value assumptions.
  • Kill or re-scope projects that cannot demonstrate credible impact within 12–18 months.

2. Build an AI factory

  • Standardise how you source, prioritise and industrialise use cases.
  • Focus on a small number of high-value domains and build shared platforms and solution libraries instead of one-off solutions.

OPERATIONAL EXCELLENCE

3. Redesign workflows around agents (the 10–20–70 rule)

  • Assume that only 10% of success is the model and 20% is tech/data; the remaining 70% is people and process.
  • Measure progress in terms of processes simplified or eliminated, not just tasks automated.

4. Treat data as a product

  • Invest in “AI-ready data”: unified, well-governed, timely, with clear definitions and ownership.
  • Make data readiness a gating criterion for funding AI use cases.

5. Governance by design, not retrofit

  • Mandate governance from day one: model inventories, risk classification, human-in-the-loop for high-impact decisions.
  • Build transparency, explainability and audit trails into systems upfront.

ORGANISATIONAL CAPABILITY

6. Organise for AI: leadership, CoEs and agent operations

  • Clarify executive ownership (CAIO or equivalent), empower an AI CoE to execute, and stand up agent-ops capabilities to monitor and steer your digital workforce.

7. Commit to continuous upskilling

  • Assume roughly 44% of current skills will materially change over the next five years; treat AI literacy and orchestration skills as mandatory.
  • Invest more in upskilling existing talent than in recruiting “unicorns.”

The Bottom Line

The defining question for 2026 is no longer “Should we adopt AI?” but “How do we create measurable value from AI while managing its risks?”

The performance gap is widening fast: companies redesigning workflows are pulling three to five times ahead of those merely automating existing processes. By 2027, this gap will be extremely hard to close.

Boards and executive teams that answer this through focused implementation, genuine workflow redesign, responsible governance and continuous workforce development will set the pace for the rest of the decade. Those that continue treating AI as experimentation will find themselves competing against organisations operating at multiples of their productivity, a gap that will be very hard to recover from.


Five AI Breakthroughs From 2025 That Will Show Up in Your P&L

A year ago, if you asked an AI to handle a complex customer refund, it might draft an email for you to send.

As 2025 comes to a close, AI agents in some organisations can now check the order history, verify the policy, process the refund, update several systems, and send the confirmation. That is not just a better copilot; it is a different category of capability.

Throughout 2025, the story has shifted from “we are running pilots” to AI quietly creating real value inside the enterprise: agents that execute multi-step workflows, voice AI that resolves problems end-to-end, multimodal AI that works on the messy mix of enterprise information, sector-specific applications in life sciences and healthcare, industrial and manufacturing, consumer industries and professional services, and more reliable systems that leaders are prepared to trust with high-stakes work.

This newsletter focuses on what is genuinely possible by the end of 2025 that was hard or rare at the end of 2024, and where new value pools are emerging.


1. From copilots to autonomous workflows

At the end of 2024, most enterprise AI lived in copilots and Q&A over knowledge bases. You prompted; the system responded, one step at a time.

By the end of 2025, leading organisations are using AI agents that can run a full workflow: collect inputs, make decisions under constraints, act in multiple systems, and report back to humans at defined checkpoints. They combine memory (what has already been done), tool use (which systems to use), and orchestration (what to do next) in a way that was rare a year ago.

New value pools

  • Life sciences and healthcare: automating start-up administration, safety case intake, and medical information requests so clinical and medical teams focus on judgement, not paperwork.
  • Industrial and manufacturing: agents handling order-to-cash or maintenance workflows end-to-end. From reading emails and work orders to updating ERP and scheduling technicians.
  • Professional services: agents that move proposals, statements of work, and deliverables through review, approval and filing, improving margin discipline and cycle time.

2. Voice AI as a frontline automation channel

At the end of 2024, voice AI mostly meant smarter voice responses: slightly better menus, obvious hand-offs to humans, and limited ability to handle edge cases.

By the end of 2025, voice agents can hold natural two-way conversations, look up context across systems in real time, and execute the simple parts of a process while the customer is still on the line. For a growing part of the call mix, “talking to AI” is now an acceptable – sometimes preferred – experience.

New value pools

  • Consumer industries: automating high-volume inbound queries such as order status, returns, bookings, and loyalty program questions, with seamless escalation for the calls that truly need an expert.
  • Life sciences and healthcare: patient scheduling, pre-visit questionnaires, follow-up reminders, and simple triage flows, integrated with clinical and scheduling systems.
  • Cross-industry internal support: IT and HR helpdesks where a voice agent resolves routine issues, captures clean tickets, and routes only non-standard requests to human staff.

3. Multimodal AI and enterprise information

Most early deployments of generative AI operated in a text-only world. The reality of large organisations, however, is multimodal: PDFs, decks, images, spreadsheets, emails, screenshots, sensor data, and more.

By the end of 2025, leading systems can read, interpret, and act across all of these. They can navigate screens, and combine text, tables, and images in a single reasoning chain. On the creation side, they can generate on-brand images and videos with consistent characters and scenes, good enough for many marketing and learning use cases.

New value pools

  • Life sciences and healthcare: preparing regulatory and clinical submission packs by extracting key data and inconsistencies across hundreds of pages of protocols, reports, and correspondence.
  • Industrial and manufacturing: combining images, sensor readings, and maintenance logs to flag quality issues or emerging equipment failures before they hit output.
  • Consumer and professional services: producing localised campaigns, product explainers, and internal training content in multiple languages and formats without linear increases in agency spend.

4. Sector-specific impact in the P&L

In 2024, many sector examples of AI looked impressive on slides but were limited in scope. By the end of 2025, AI is starting to move core economics in several industries.

In life sciences and healthcare, AI-driven protein and molecule modelling shortens early discovery cycles and improves hit rates, while diagnostic support tools help clinicians make better real-time decisions. In industrial and manufacturing businesses, AI is layered onto predictive maintenance, scheduling, and quality control to improve throughput and reduce downtime. Consumer businesses are using AI to personalise offers, content, and service journeys at scale. Professional services firms are using AI for research, drafting, and knowledge reuse.

New value pools

  • Faster innovation and time-to-market: from earlier drug discovery milestones to quicker design and testing cycles for industrial products and consumer propositions.
  • Operational excellence: higher asset uptime, fewer defects, better utilisation of people and equipment across plants, networks, and service operations.
  • Revenue and margin uplift: more profitable micro-segmentation in consumer industries, and higher matter throughput and realisation rates in professional and legal services.

5. When AI became trustworthy enough for high-stakes work

Through 2023 and much of 2024, most organisations treated generative AI as an experiment.

By the end of 2025, two developments make it more realistic to use AI in critical workflows. First, dedicated reasoning models can work step by step on complex problems in code, data, or law, and explain how they arrived at an answer. Second, governance has matured: outputs are checked against source documents, policies are encoded as guardrails, and model risk is treated like any other operational risk.

New value pools

  • Compliance and risk: automated checks of policies, procedures, and documentation, with AI flagging exceptions and assembling evidence packs for human review.
  • Legal and contract operations: first pass drafts and review of contracts, research memos, and standard documents, with lawyers focusing on negotiation and high judgement work.
  • Financial and operational oversight: anomaly detection, narrative reporting, and scenario analysis that give CFOs and COOs a clearer view of where to intervene.

What this sets up for 2026

Everything above is the backdrop for 2026 – a year that will be less about experimentation and more about pragmatic implementation under real financial and regulatory scrutiny.

In my next newsletter, I will zoom in on:

  • Five strategic shifts – including the move from hype to “AI factories” with CFOs as gatekeepers, agents embedded in everyday roles, new organisational structures (CAIOs, AI CoEs, agent ops), governance moving from optional to existential, and the data-quality bottleneck that will decide who can actually scale.
  • Seven conditions for success – the financial, operational, and organisational foundations that separate companies who turn AI into EBIT from those who stay stuck in pilots.

Rather than extend this piece with another checklist, I will leave you with one question as 2025 closes:

Are you treating today’s AI capabilities as isolated experiments – or as the building blocks of the AI factory, governance, data foundations, and workforce that your competitors will be operating in 2026?

In the next edition, we will explore what it takes to answer that question convincingly.

Is There Still a Future for ERP & CRM in an AI-Driven Enterprise?

Why your next ERP/CRM decision is really about agents, data platforms, and money flows.

Most large organisations are in a similar place:

  • An ageing ERP landscape (often several instances)
  • Fragmented or underused CRM
  • Rapidly growing investments in cloud data platforms and AI
  • A board asking, “What’s our plan for the next 5–10 years?”

For the last two decades, the core question was simple:

Which suite do we standardise on?

In an AI- and agent-driven world, the question becomes more strategic:

Will our core really be ERP and CRM suites – or will it be data platforms and agents that just happen to talk to them?

From what I see in digital and AI transformations, three futures for ERP and CRM are emerging. They’re not mutually exclusive, but where you place your bets will shape your architecture, cost base and operating model for a decade.


Three futures for ERP & CRM

Option 1 – AI-Augmented ERP & CRM

In the first future, ERP and CRM remain your system of record and primary process engine.

The change comes from infusing them with AI:

  • Copilots and assistants embedded in finance, supply chain, HR, sales and service
  • Predictive models for forecasting, anomaly detection and planning
  • Built-in automation and recommendations inside the suite

The transformation journey is familiar: upgrade or replace core suites, rationalise processes, improve data, and switch on the AI capabilities that are now part of the platform.

The advantage is continuity: the mental model of “core systems” barely changes. The risk is spending heavily to recreate yesterday’s processes on a new, AI-decorated core.


Option 2 – Thin Core with an Agentic Front End

In the second future, ERP and CRM are still critical, but they are no longer the system of work people experience every day.

You introduce an agentic and workflow layer on top:

  • End-to-end journeys like lead-to-cash or source-to-pay are modelled and executed in this layer
  • Agents and orchestrated workflows call into ERP, CRM, HR and bespoke systems as needed
  • Employees increasingly interact with unified workspaces and conversational agents, rather than individual applications

ERP and CRM become transactional backbones and data providers. The real differentiation – and day-to-day productivity – lives in the orchestration layer.

This opens up flexibility and speed, but it also adds a powerful new layer that must be governed and paid for.


Option 3 – The Agentic Enterprise (Beyond ERP & CRM as Products)

In the third future, ERP and CRM stop being “big systems you buy” and become behaviours of your architecture.

  • Core business facts (orders, inventory, contracts, customer interactions) live in event streams, ledgers and shared data platforms, not only inside monolithic applications
  • Agents and policy engines handle much of the business logic and user interaction
  • Composable services provide domain capabilities – pricing, risk, subscriptions, entitlements – which agents combine to run processes

In this world, your data and event platforms are as central to running the business as any traditional application suite. ERP and CRM don’t disappear as concepts, but they are no longer the obvious centre of gravity.

Very few organisations are here end-to-end today, but many are already making decisions that either keep this option open – or quietly close it off.
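To make the Option 3 idea concrete, here is a minimal, illustrative sketch of "core facts living in an event log" rather than in a monolithic application table. Everything in it (the `OrderPlaced`/`OrderShipped` event names, the reducer) is invented for illustration and does not reflect any vendor's product; real event-sourced platforms add persistence, schemas, and replay guarantees.

```python
from dataclasses import dataclass
from typing import Callable

# Toy event-sourced core: the append-only log is the system of record;
# current state is derived by replaying events through a reducer.

@dataclass
class Event:
    kind: str      # e.g. "OrderPlaced", "OrderShipped" (invented names)
    payload: dict

class EventLog:
    def __init__(self) -> None:
        self._events: list[Event] = []

    def append(self, event: Event) -> None:
        self._events.append(event)  # the log, not an app table, holds the facts

    def replay(self, reducer: Callable[[dict, Event], dict]) -> dict:
        state: dict = {}
        for e in self._events:
            state = reducer(state, e)
        return state

def order_reducer(state: dict, e: Event) -> dict:
    # One composable "domain capability": deriving order status from events.
    if e.kind == "OrderPlaced":
        state[e.payload["order_id"]] = "open"
    elif e.kind == "OrderShipped":
        state[e.payload["order_id"]] = "shipped"
    return state

log = EventLog()
log.append(Event("OrderPlaced", {"order_id": "A1"}))
log.append(Event("OrderShipped", {"order_id": "A1"}))
log.append(Event("OrderPlaced", {"order_id": "A2"}))

orders = log.replay(order_reducer)
# orders == {"A1": "shipped", "A2": "open"}
```

The point of the sketch: once facts live in the log, an agent or a traditional ERP screen are just two consumers of the same derived state, which is why the data platform, not the application suite, becomes the centre of gravity.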


Who is shaping these futures?

Once you have the three options in mind, it’s easier to see how the main players line up.

1. The suite giants – anchoring Option 1

The large business application vendors are doubling down on AI-augmented ERP and CRM – their suites for finance, operations, HR, sales and service:

  • SAP – core finance and supply chain suite, plus customer experience applications
  • Microsoft – Dynamics 365 for finance, operations, sales and customer service
  • Salesforce – cloud platform for sales, service and marketing
  • Oracle – cloud applications for finance, operations, HR and customer experience
  • Workday – integrated platform for HR and finance
  • ServiceNow – backbone for IT, employee & customer service in many organisations

Their common play:

  • Modernise their suites
  • Embed copilots and domain agents
  • Extend their own low-code and workflow tools

Goal: keep the system of record and main process engine in their platform, and make it smarter.


2. Agentic & workflow fronts – powering Option 2

A second cluster focuses on becoming your system of work – the main place where employees and agents operate.

Suite-centric fronts:

  • Microsoft: Power Platform and Copilot as the agentic layer across Dynamics and Microsoft 365
  • Salesforce: Agentforce and Slack as the agentic front for CRM and analytics
  • SAP: Joule and SAP Build/BTP to orchestrate across S/4HANA and line-of-business apps
  • Workday: emerging agent frameworks on its unified data model
  • ServiceNow: Now Platform with AI Agents and workflows across IT, employee and customer service

Vendor-neutral fronts:

  • Pega, Appian, OutSystems, Mendix – workflow and low-code platforms used to model and run journeys that cut across multiple systems
  • UiPath, Automation Anywhere – automation and “agentic” platforms that orchestrate work across ERP, CRM and legacy
  • Celonis and other process-intelligence tools – providing the process “map” and telemetry layer that agents need

All of them are, in different ways, working to own the agentic front end over a mixed application estate.


3. Cloud & data platforms – foundations for Options 2 and 3

Cloud and data platforms are the quiet foundation for the second and third futures:

  • Hyperscalers: AWS, Microsoft Azure, Google Cloud – providing compute, managed models, and agent frameworks (e.g. Amazon Q/Bedrock, Azure OpenAI/Fabric, Google Vertex AI)
  • Data platforms: Snowflake, Databricks, and cloud-native warehouses and lakehouses

Increasingly, these platforms hold the shared operational truth: the consolidated view of customers, products, transactions and events that both applications and agents rely on.

Many organisations are already investing heavily here. The strategic question is whether these platforms remain analytics add-ons, or become part of your core system-of-record and execution layer.


4. AI-native and event-sourced challengers – the Option 3 edge

A final group rethinks ERP-like capabilities from scratch:

  • Rillet, ContextERP and other AI-native or event-sourced ERPs
  • Vertical or regional challengers that are event-driven, API-first and agent-friendly

Today they mostly play in mid-market segments or specific industries, but architecturally they look closest to the Option 3 end-state.


What the options mean when you start from legacy

Most organisations don’t choose between these options on a clean sheet. They start from multiple ERPs, several CRMs, custom code and fragmented data.

So what does it mean to lean into each path?

Leaning into Option 1 – modernise & augment the core

You are committing to:

  • Selecting strategic ERP/CRM suites and running classic, multi-year core transformations
  • Using the move to modern platforms to simplify processes and master data, not just lift-and-shift
  • Turning on embedded AI features where they are safe and valuable

Technology leaders clear technical debt and consolidate control. Finance leaders get large but relatively predictable investments with a familiar licence profile. Business leaders gain stability and better data, but day-to-day work may feel similar – just on a newer system.

The risk: over-indexing on the core and delaying cross-silo improvements.


Leaning into Option 2 – build an agentic layer on top

You are choosing to:

  • Make one or two workflow / agent / low-code platforms your main improvement engine
  • Redesign end-to-end journeys that span multiple systems
  • Put agents and orchestrated workspaces in front of employees, and increasingly, customers

Done well, this can deliver visible progress in 12–24 months without waiting for every core system to be replaced.

But it also changes your cost and control model:

  • You may reduce some “power user” licences in ERP/CRM
  • You increase consumption spend on orchestration platforms, data platforms and AI inference

It is not automatically cheaper. It is a reallocation of spend from application licences to data, AI and orchestration – and it must be managed that way.


Steering towards Option 3 – design for an agentic, data-centric future

Very few organisations will jump straight to Option 3, but you can lean in that direction when you invest:

  • Build new capabilities (for example, subscription management, partner platforms, pricing engines) as services on top of shared data and events, not as deep customisations inside ERP
  • Let more business logic live in agents and policy layers that call into applications, rather than being fully hard-coded in those applications
  • Treat your data platform as part of the operational nervous system, not just the reporting layer

This demands stronger engineering and architecture capabilities and a board that understands it is a long-term platform strategy, not a one-off project.


No-regret moves for the next 24 months

Whatever balance you choose between the three futures, some steps are almost always sensible.

1. Stabilise and simplify the core

  • Retire the most fragile legacy systems
  • Reduce bespoke code where it doesn’t create differentiation
  • Use any ERP/CRM upgrade to simplify processes and data, not just modernise technology

2. Pick your strategic orchestration and agent platforms

  • Decide whether your main system of work will be suite-centric or vendor-neutral
  • Avoid ending up with multiple, overlapping agentic layers because different teams picked their own favourites

3. Use process intelligence as the map for agents

You should not unleash agents on processes you don’t understand.

  • Use process mining and process intelligence (for example, Celonis, Signavio and similar tools) to discover how key flows actually run and where the real bottlenecks and risks are
  • Treat this as the map and telemetry system for your agent strategy: it tells you where to start, and whether changes are helping or hurting
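As a rough illustration of what that "map and telemetry" means in practice, the sketch below derives the slowest hand-off from a toy event log. The data, field names, and process are invented; commercial tools like Celonis do this at scale over real system logs.

```python
from collections import defaultdict
from datetime import datetime

# Toy process-mining pass: rows are (case_id, activity, timestamp);
# we compute the average wait between consecutive activities per case
# to find the hand-off where agents (or redesign) would help most.

event_log = [
    ("PO-1", "Created",  datetime(2025, 1, 1, 9)),
    ("PO-1", "Approved", datetime(2025, 1, 3, 9)),   # 48h wait
    ("PO-1", "Paid",     datetime(2025, 1, 3, 12)),
    ("PO-2", "Created",  datetime(2025, 1, 2, 9)),
    ("PO-2", "Approved", datetime(2025, 1, 2, 15)),  # 6h wait
    ("PO-2", "Paid",     datetime(2025, 1, 2, 16)),
]

by_case: dict[str, list] = defaultdict(list)
for case, activity, ts in sorted(event_log, key=lambda r: (r[0], r[2])):
    by_case[case].append((activity, ts))

waits: dict[tuple, list[float]] = defaultdict(list)
for steps in by_case.values():
    for (a, t1), (b, t2) in zip(steps, steps[1:]):
        waits[(a, b)].append((t2 - t1).total_seconds() / 3600)

# The transition with the highest average wait is the bottleneck candidate.
bottleneck = max(waits, key=lambda k: sum(waits[k]) / len(waits[k]))
# bottleneck == ("Created", "Approved")
```

The same derived numbers double as before/after telemetry: rerun the computation after deploying an agent and check whether the bottleneck's average wait actually fell.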

4. Start with bounded agent use cases and clear governance

  • Begin where agents prepare work for humans or act within tight financial and policy limits
  • Put in place shared governance for agents: which systems they can touch, what actions they can take automatically, and how you monitor them
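The governance rule above can be made mechanical. Here is a minimal sketch of an agent policy check, where every proposed action passes through an explicit allow-list and a financial limit before execution; all names, actions, and thresholds are invented examples, not any product's API.

```python
from dataclasses import dataclass

@dataclass
class AgentPolicy:
    allowed_systems: set[str]   # systems this agent may touch at all
    auto_actions: set[str]      # actions it may take without a human
    auto_approve_limit: float   # max value it may commit on its own

def authorize(policy: AgentPolicy, system: str, action: str, value: float) -> str:
    """Gate every agent action: block, escalate to a human, or auto-approve."""
    if system not in policy.allowed_systems:
        return "blocked"                    # outside the agent's mandate
    if action not in policy.auto_actions or value > policy.auto_approve_limit:
        return "escalate_to_human"          # agent prepares, human decides
    return "auto_approved"                  # within tight policy limits

# Hypothetical invoice agent: may only post invoices in the ERP, up to 5,000.
invoice_agent = AgentPolicy(
    allowed_systems={"ERP"},
    auto_actions={"post_invoice"},
    auto_approve_limit=5_000.0,
)

assert authorize(invoice_agent, "ERP", "post_invoice", 1_200.0) == "auto_approved"
assert authorize(invoice_agent, "ERP", "post_invoice", 25_000.0) == "escalate_to_human"
assert authorize(invoice_agent, "ERP", "approve_payment", 100.0) == "escalate_to_human"
assert authorize(invoice_agent, "CRM", "update_account", 0.0) == "blocked"
```

Logging every `authorize` decision gives you the monitoring trail the bullet asks for, and widening `auto_actions` or the limit becomes a deliberate governance change rather than a silent drift.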

ERP, CRM and the long game of AI

ERP and CRM are not going away. But they are no longer the only, or even the obvious, centre of gravity.

Over the next decade, three design choices will matter more than any feature list:

  • Where your core operational data and system-of-record live – primarily in suites, primarily in shared data platforms, or a deliberate mix
  • Where your business logic runs – inside applications, in an agentic layer, or in composable services
  • Where your money flows – mostly into licences and implementation, or increasingly into cloud data and AI consumption

The real risk is not picking the “wrong” vendor.
It is drifting into an AI and agent future that recreates today’s complexity and cost in a new shape.

The organisations that pull ahead will be the ones whose executive teams treat this as a shared design decision, not just an IT refresh – and consciously decide how far they want to travel from Option 1, through Option 2, towards Option 3.

Where AI Is Creating the Most Value (Q4 2025)

There’s still a value gap—but leaders are breaking away. In the latest BCG work, top performers report around five times more revenue uplift and three times deeper cost reduction from AI than peers. The common thread: they don’t bolt AI onto old processes—they rewire the work. As BCG frames it, the 10-20-70 rule applies: roughly 10% technology, 20% data and models, and 70% process and organizational change. That’s where most of the value is released.

This article is for leaders deciding where to place AI bets in 2025. If you’re past “should we do AI?” and into “where do we make real money?”, this is your map.


Where the money is (cross-industry)

1) Service operations: cost and speed
AI handles simple, repeatable requests end-to-end and coaches human agents on the rest. The effect: shorter response times, fewer repeat contacts, and more consistent outcomes—without sacrificing customer experience.

2) Supply chain: forecast → plan → move
The gains show up in fewer stockouts, tighter inventories, and faster cycle times. Think demand forecasting, production planning, and dynamic routing that reacts to real-world conditions.

3) Software and engineering: throughput
Developer copilots and automated testing increase release velocity and reduce rework. You ship improvements more often, with fewer defects, and free scarce engineering time for higher-value problems.

4) HR and talent: faster funnels and better onboarding/learning
Screening, scheduling, and candidate communication are compressed from days to hours. Internal assistants support learning and workforce planning. The results: shorter time-to-hire and better conversion through each stage.

5) Marketing and sales: growing revenue
Personalization, next-best-action, and on-the-fly content creation consistently drive incremental sales. This is the most frequently reported area for measurable revenue lift.

Leadership advice: Pick two or three high-volume processes (at least one cost, one revenue). Redesign the workflow rather than just adding AI on top. Set hard metrics (cost per contact, cycle time, revenue per visit) and a 90-day checkpoint. Industrialize what works; kill what doesn’t.


Sector spotlights

Consumer industries (Retail & Consumer Packaged Goods)

Marketing and sales.

  • Personalized recommendations increase conversion and basket size; retail media programs are showing verified incremental sales.
  • AI-generated marketing content reduces production costs and speeds creative iteration across markets and channels. Mondelez reported 30-50% reduction in marketing content production costs using generative AI at scale.
  • Campaign analytics that used to take days are produced automatically, so teams run more “good bets” each quarter.

Supply chain.

  • Demand forecasting sharpens purchasing and reduces waste.
  • Production planning cuts changeovers and work-in-progress.
  • Route optimization lowers distance traveled and fuel, improving on-time delivery.

Customer service.

  • AI agents now resolve a growing share of contacts end-to-end. IKEA’s AI agents already handle 47% of all customer requests, freeing service staff to give more support on the remaining questions.
  • Agent assist gives human colleagues instant context and suggested next steps.
    The result is more issues solved on first contact, shorter wait times, and maintained satisfaction, provided clear hand-offs to humans exist for complex cases.

What to copy: Start with one flagship process in each of the three areas above; set a 90-day target; only then roll it across brands and markets with a standard playbook.


Manufacturing (non-pharma)

Predictive maintenance.
When tied into scheduling and spare-parts planning, predictive maintenance reduces unexpected stoppages and maintenance costs—foundational for higher overall equipment effectiveness (OEE).

Computer-vision quality control.
In-line visual inspection detects defects early, cutting scrap, rework, and warranty exposure. Value compounds as models learn across lines and plants.

Production scheduling.
AI continuously rebalances schedules for constraints, changeovers, and demand shifts—more throughput with fewer bottlenecks. Automotive and electronics manufacturers report 5-15% throughput gains when AI-driven scheduling handles real-time constraints.

Move to scale: Standardize data capture on the line, run one “AI plant playbook” to convergence, then replicate. Treat models as line assets with clear ownership, service levels, and a retraining cadence.


Pharmaceuticals

R&D knowledge work.
AI accelerates three high-friction areas: (1) large evidence reviews, (2) drafting protocols and clinical study reports, and (3) assembling regulatory summaries. You remove weeks from critical paths and redirect scientists to higher-value analysis.

Manufacturing and quality.
Assistants streamline batch record reviews, deviation write-ups, and quality reports. You shorten release cycles and reduce delays. Govern carefully under Good Manufacturing Practice, with humans approving final outputs.

Practical tip: Stand up an “AI for documents” capability (standardized templates, automated redaction, citation checking, audit trails) before you touch lab workflows. It pays back quickly, proves your governance model, and reduces compliance risk when you expand to higher-stakes processes.


Healthcare providers

Augment the professional; automate the routine. Radiology, pathology, and frontline clinicians benefit from AI that drafts first-pass reports, triages cases, and pre-populates documentation. Northwestern Medicine studies show approximately 15.5% average productivity gains (up to 40% in specific workflows) in radiology report completion without accuracy loss. Well-designed oversight maintains quality while reducing burnout.

Non-negotiable guardrail: Clear escalation rules for edge cases and full traceability. If a tool can’t show how it arrived at a suggestion, it shouldn’t touch a clinical decision. Establish explicit human review protocols for any AI-generated clinical content before it reaches patients or medical records.


Financial services

Banking.

  • Service and back-office work: assistants summarize documents, draft responses, and reconcile data. JPMorgan reports approximately 30% fewer servicing calls per account in targeted Consumer and Community Banking segments and 15% lower processing costs in specific workflows.
  • Risk and compliance: earlier risk flags, smarter anti-money-laundering reviews, and cleaner audit trails reduce losses and manual rework.

Insurance.

  • Claims: straight-through processing for simple claims moves from days to hours.
  • Underwriting: AI assembles files and surfaces risk signals so underwriters focus on complex judgment.
  • Back office: finance, procurement, and HR automations deliver steady, compounding savings.

Leadership note: Treat service assistants and claims bots as products with roadmaps and release notes—not projects. That discipline keeps quality high as coverage expands.


Professional services (legal, consulting, accounting)

Document-heavy work is being rebuilt: contract and filing review, research synthesis, proposal generation. Well-scoped processes often see 40–60% time savings. Major law firms report contract review cycles compressed from 8–12 hours to 2–3 hours for standard agreements, with associates redirected to judgment-heavy analysis and client advisory work.

Play to win: Build a governed retrieval layer over prior matters, proposals, and playbooks—your firm’s institutional memory—then give every practitioner an assistant that can reason over it.


Energy and utilities

Grid and renewables.
AI improves demand and renewable forecasting and helps balance the grid in real time. Autonomous inspections (drones plus computer vision) speed asset checks by 60-70% and reduce hazards. Predictive maintenance on critical infrastructure prevents outages—utilities report 20-30% reduction in unplanned downtime when AI is tied into work order systems and cuts truck rolls (field service visits).

How to scale: Start with one corridor or substation, prove inspection cycle time and fault detection, then expand with a standard data schema so models learn from every site.


Next Steps (practical and measurable)

1) Choose three processes—one for cost, one for revenue, one enabler.
Examples:

  • Cost: customer service automation, predictive maintenance, the month-end finance close.
  • Revenue: personalized offers, “next-best-action” in sales, improved online merchandising.
  • Enabler: developer assistants for code and tests, HR screening and scheduling.
    Write a one-line success metric and a quarterly target for each (e.g., “reduce average response time by 30%,” “increase conversion by 2 points,” “ship weekly instead of bi-weekly”).

2) Redesign the work, not just the process map.
Decide explicitly: what moves to the machine, what stays with people, where the hand-off happens, and what the quality gate is. Train for it. Incentivize it.

3) Industrialize fast.
Stand up a small platform team for identity, data access, monitoring, and policy. Establish lightweight model governance. Create a change backbone (playbooks, enablement, internal communications) so each new team ramps faster than the last.

4) Publish a value dashboard.
Measure cash, not demos: cost per contact, cycle time, on-shelf availability, release frequency, time-to-hire, revenue per visit. Baseline these metrics before launch—most teams skip this step and cannot prove impact six months later when challenged. Review monthly. Retire anything that doesn’t move the number.

5) Keep humans in the loop where it matters.
Customer experience, safety, financial risk, and regulatory exposure all require clear human decision points. Automate confidently—but design escalation paths from day one.


Final word

In 2025, AI pays where volume is high and rules are clear (service, supply chain, HR, engineering), and where personalization drives spend (marketing and sales). The winners aren’t “using AI.” They are rewiring how the work happens—and they can prove it on the P&L.

How AI is Reshaping Human Work, Teams, and Organisational Design

The implications of AI are profound: when individuals can deliver team-level output with AI, organisations must rethink not just productivity, but the very design of work and teams. A recent Harvard Business School and Wharton field experiment titled The Cybernetic Teammate offers one of the clearest demonstrations of this shift. Conducted with 776 professionals at Procter & Gamble, the study compared individuals and teams working on real product-innovation challenges, both with and without access to generative AI.

The results were striking:

  • Individuals using AI performed as well as, or better than, human teams without AI.
  • Teams using AI performed best of all.
  • AI also balanced out disciplinary biases—commercial and technical professionals produced more integrated, higher-quality outputs when assisted by AI.

In short, AI amplified human capability at both the individual and collective level. It became a multiplier of creativity, insight, and balance—reshaping the traditional boundaries of teamwork and expertise.

The Evidence Is Converging

Other large-scale studies reinforce this picture. A Harvard–BCG experiment showed consultants using GPT-4 were 12% more productive, 25% faster, and delivered work rated 40% higher in quality for tasks within the model’s “competence frontier”.


How Work Will Be Done Differently

These findings signal a fundamental redesign in how work is organised. The dominant model—teams collaborating to produce output—is evolving toward individual-with-AI first, followed by team integration and validation.

A typical workflow may now look like this:

AI-assisted ideation → human synthesis → AI refinement → human decision.

Work becomes more iterative, asynchronous, and cognitively distributed. Human collaboration increasingly occurs through the medium of AI: teams co-create ideas, share prompt libraries, and build upon each other’s AI-generated drafts.

The BCG study introduces a useful distinction:

  • Inside the AI frontier: tasks within the model’s competence—ideation, synthesis, summarisation—where AI can take the lead.
  • Outside the AI frontier: tasks requiring novel reasoning, complex judgment, or proprietary context—where human expertise must anchor the process.

Future roles will be defined less by function and more by how individuals navigate that frontier: knowing when to rely on AI and when to override it. Skills like critical reasoning, verification, and synthesis will matter more than rote expertise.


Implications for Large Enterprises

For established organisations, the shift toward AI-augmented work changes the anatomy of structure, leadership, and learning.

1. Flatter, more empowered structures.
AI copilots widen managerial spans by automating coordination and reporting. However, they also increase the need for judgmental oversight—requiring managers who coach, review, and integrate rather than micromanage.

2. Redefined middle-management roles.
The traditional coordinator role gives way to integrator and quality gatekeeper. Managers become stewards of method and culture rather than traffic controllers.

3. Governance at the “AI frontier.”
Leaders must define clear rules of engagement: what tasks can be automated, which require human review, and what data or models are approved. This “model–method–human” control system ensures both productivity and trust.

4. A new learning agenda.
Reskilling moves from technical training to cognitive fluency: prompting, evaluating, interpreting, and combining AI insights with business judgment. The AI-literate professional becomes the new organisational backbone.

5. Quality and performance metrics evolve.
Volume and throughput give way to quality, cycle time, rework reduction, and bias detection—metrics aligned with the new blend of human and machine contribution.

In short, AI doesn’t remove management—it redefines it around sense-making, coaching, and cultural cohesion.


Implications for Startups and Scale-Ups

While enterprises grapple with governance and reskilling, startups are already operating in an AI-native way.

Evidence from recent natural experiments shows that AI-enabled startups raise funding faster and with leaner teams. The cost of experimentation drops, enabling more rapid iteration but also more intense competition.

The typical AI-native startup now runs with a small human core and an AI-agent ecosystem handling customer support, QA, and documentation. The operating model is flatter, more fluid, and relentlessly data-driven.

Yet the advantage is not automatic. As entry barriers fall, differentiation depends on execution, brand, and customer intimacy. Startups that harness AI for learning loops—testing, improving, and scaling through real-time feedback—will dominate the next wave of digital industries.


Leadership Imperatives – Building AI-Enabled Work Systems

For leaders, the challenge is no longer whether to use AI, but how to design work and culture around it. Five imperatives stand out:

  1. Redesign workflows, not just add tools. Map where AI fits within existing processes and where human oversight is non-negotiable.
  2. Build the complements. Create shared prompt libraries, custom GPTs, structured review protocols, and access to verified data.
  3. Run controlled pilots. Test AI augmentation in defined workstreams, measure speed, quality, and engagement, and scale what works.
  4. Empower and educate. Treat AI literacy as a strategic skill—every employee a prompt engineer, every manager a sense-maker.
  5. Lead the culture shift. Encourage experimentation, transparency, and open dialogue about human-machine collaboration.

Closing Thought

AI will not replace humans or teams. But it will transform how humans and teams create value together.

The future belongs to organisations that treat AI not as an external technology, but as an integral part of their work design and learning system. The next generation of high-performing enterprises—large and small—will be those that master this new choreography between human judgment and machine capability.

AI won’t replace teams—but teams that know how to work with AI will outperform those that don’t.

More on this in one of my next newsletters.

Consultancy, Rewired: AI’s Impact on consultancy firms and what their clients should expect

The bottom line: consulting is not going away. It is changing—fast. AI removes a lot of manual work and shifts the focus to speed, reusable tools, and results that can be measured. This has consequences for how firms are organised and how clients buy and use consulting.


What HBR says

The main message: AI is reshaping the structure of consulting firms. Tasks that used to occupy many junior people—research, analysis, and first-pass modelling—are now largely automated. Teams get smaller and more focused. Think of a move from a wide pyramid to a slimmer column.

New human roles matter more: people who frame the problem, translate AI insights into decisions, and work with executives to make change happen. HBR also points to a new wave of AI-native boutiques. These firms start lean, build reusable assets, and aim for outcomes rather than volume of slides.

What The Economist says

The emphasis here is on client expectations and firm economics. Clients want proof of impact, not page counts. If AI can automate a lot of the production work, large firms must show where they still create unique value. That means clearer strategies, simpler delivery models, and pricing that links fees to outcomes.

The coverage also suggests this is a structural shift, not a short-term cycle. Big brands will need to combine their access and experience with technology, reusable assets, and strong governance to stay ahead.


What AI can do in consulting — now vs. next (practical view)

Now

  • Discovery & synthesis. AI can sweep through filings, research, transcripts, and internal knowledge bases to cluster themes, extract evidence with citations, and surface red flags. This compresses the discovery phase so teams spend their time framing the problem and its implications.
  • First-pass quantification & modelling. It produces draft market models and sensitivity analyses that consultants then stress-test. The benefit isn’t perfect numbers; it’s cycle-time—from question to a defendable starting point—in hours, not days.
  • Deliverables at speed. From storylines to slide drafts and exhibits, AI enforces structure and house style, handles versioning, and catches inconsistencies. Human effort shifts to message clarity, executive alignment, and implications for decision makers.
  • Program operations & governance. Agents can maintain risk and issue logs, summarize meetings, chase actions, and prepare steering packs. Leaders can use meeting time for choices, not status updates.
  • Knowledge retrieval & reuse. Firm copilots bring up relevant cases, benchmarks, and experts. Reuse becomes normal, improving speed and consistency across engagements.

Next (12–24 months)

  • Agentic due diligence. Multi-agent pipelines will triage vast data sets (news, filings, call transcripts), propose claims with evidence trails, and flag anomalies for partner review—compressing weeks to days while keeping human judgment in the loop.
  • Scenario studios and digital twins. Reusable models (pricing, supply, workforce) will let executives explore “what-if” choices live, improving decision speed and buy-in.
  • Operate / managed AI. Advisory will bundle with run-time AI services (build-run-transfer), priced on SLAs or outcome measures, linking fees to performance after go-live.
  • Scaled change support. Chat-based enablement and role-tailored nudges will help people adopt new behaviors at scale; consultants curate and calibrate content and fine-tune interventions instead of running endless classroom sessions.

Reality check: enterprise data quality, integration, and model-risk constraints keep humans firmly in the loop. The best designs make this explicit with approvals, audit trails, and guardrails.


Five industry scenarios (2025–2030)

  1. AI-Accelerated Classic. The big firms keep CXO access but run leaner teams; economics rely on IP-based assets, and pricing shifts from hours to outcomes.
  2. Hourglass Market. Strong positions at the top (large integrators) and at the bottom (specialist boutiques). The middle gets squeezed as clients self-serve standard analysis.
  3. Productised & Operate. Advice comes with data, models, and managed services. Contracts include service levels and shared-savings, tying value to real-world results.
  4. Client-First Platforms. Companies build internal AI studios and bring in targeted experts. Firms must plug into client platforms and compete on speed, trust, and distinctive assets.
  5. AI-Native Agencies Rise. New entrants born with automation-first workflows and thin organisational layers scale quickly—resetting expectations of speed, price-performance, and what a “team” looks like.

What clients should ask for (and firms should offer)

  • Ask for assets, not documents. Insist on reusable data, models, and playbooks that you keep using after the engagement, and specify this in the SOW.
  • Insist on transparency. Demand visibility into data sources, prompt chains, evaluation methods, and guardrails so you can trust, govern, and scale what’s built.
  • Design for capability transfer. Make enablement, documentation, and handover part of the scope with clear acceptance criteria.
  • Outcome-linked pricing where possible. Start with a pilot and clear success metrics; scale with contracts tied to results or service levels.

Close

AI is changing both the shape of consulting firms and the way organisations use them. Smaller teams, reusable assets, and outcome focus will define the winners.

From Org Charts to Work Charts – Designing for Hybrid Human–Agent Organisations

The org chart is no longer the blueprint for how value gets created. As Microsoft’s Asha Sharma puts it, “the org chart needs to become the work chart.” When AI agents begin to own real slices of execution—preparing customer interactions, triaging tickets, validating invoices—structure must follow the flow of work, not the hierarchy of titles. This newsletter lays out what that means for leaders and how to move, decisively, from boxes to flows.


Why this is relevant now

Agents are leaving the lab. The conversation has shifted from “pilot a chatbot” to “re-architect how we deliver outcomes.” Boards and executive teams are pushing beyond experiments toward embedded agents in sales, service, finance, and supply chain. This is not a tooling implementation—it’s an operating-model change.

Hierarchy is flattening. When routine coordination and status reporting are automated, you need fewer layers to move information and make decisions. Roles compress; accountabilities become clearer; cycle times shrink. The management burden doesn’t disappear—it changes. Leaders spend less time collecting updates and more time setting direction, coaching, and owning outcomes.

The unit of scale is changing. AI-native “tiny teams” design around flows—the sequence of steps that create value—rather than traditional functions. Large organizations shouldn’t copy their size; they should copy this unit of design. Work Charts make each flow explicit, assign human and agent owners, and let you govern and scale it across the enterprise.


What is a Work Chart?

A Work Chart is a living map of how value is produced—linking outcomes → end-to-end flows → tasks → handoffs—and explicitly assigning human owners and agent operators at each step. Where an org chart shows who reports to whom, a Work Chart shows:

  • Where the work happens – the flow and its stages
  • Who is accountable – named human owners of record
  • What is automated – agents with charters and boundaries
  • Which systems/data/policies apply – the plumbing and guardrails
  • How performance is measured – SLAs, exceptions, error/rework, latency

A Work Chart is your work graph made explicit—connecting goals, people, and permissions so agents can act with context and leaders can govern outcomes.


Transformation at every level

Board / Executive Committee
Set policy for non-human resources (NHRs) just as you do for capital and people. Define decision rights, guardrails, and budgets (compute/tokens). Require blended KPIs—speed, cost, risk, quality—reported for human–agent flows, not just departments. Make Work Charts a standing artifact in performance reviews.

Enterprise / Portfolio
Shift from function-first projects to capability platforms (retrieval, orchestration, evaluation, observability) that any BU can consume. Keep a registry of approved agents and a flow inventory so portfolio decisions always show which flows, agents, and data they affect. Treat major flow changes like product releases—versioned, reversible, and measured.

Business Units / Functions
Turn priority processes into agent-backed services with clear SLAs and a named human owner. Publish inputs/outputs, boundaries (what the agent may and may not do), and escalation paths. You are not “installing AI”; you’re standing up services that can be governed and improved.

Teams
Maintain an Agent Roster (purpose, inputs, outputs, boundaries, logs). Fold Work Chart updates into existing rituals (standups, QBRs). Managers spend less time on status and more on coaching, exception handling, and continuous improvement of the flow.

Individuals
Define personal work charts for each role (the 5–7 recurring flows they own) and the agents they orchestrate. Expect role drift toward judgment, relationships, and stewardship of AI outcomes.


Design principles – what “good” looks like

  1. Outcome-first. Start from customer journeys and Objectives and Key Results (OKRs); redesign flows to meet them.
  2. Agents as first-class actors. Every agent has a charter, a named owner, explicit boundaries, and observability from day one.
  3. Graph your work. Connect people, permissions, and policies so agents operate with context and least-privilege access.
  4. Version the flow. Treat flow changes like product releases—documented, tested, reversible, and measured.
  5. Measure continuously. Track time-to-outcome, error/rework, exception rates, and SLA adherence—reviewed where leadership already looks (business reviews, portfolio forums).

Implementation tips

1) Draw the Work Chart for mission-critical journeys
Pick one customer journey, one financial core process, and one internal productivity flow. Map outcome → stages → tasks → handoffs. Mark where agents operate and where humans remain owners of record. This becomes the executive “single source” for how the work actually gets done.

2) Create a Work Chart Registry
Create a lightweight, searchable registry that lists every flow, human owner, agent(s), SLA, source, and data/permission scope. Keep it in the systems people already use (e.g., your collaboration hub) so it becomes a living reference, not a slide deck.

3) Codify the Agent Charters
For each agent on the Work Chart, publish a one-pager: Name, Purpose, Inputs, Outputs, Boundaries, Owner, Escalation Path, Log Location. Version control these alongside the Work Chart so changes are transparent and auditable.

4) Measure where the work happens
Instrument every node with flow health metrics—latency, error rate, rework, exception volume. Surface them in the tools leaders already use (BI dashboards, exec scorecards). The goal is to manage by flow performance, not anecdotes.

5) Shift budgeting from headcount to flows
Attach compute/SLA budgets to the flows in your Work Chart. Review them at portfolio cadence. Fund increases when there’s demonstrable improvement in speed, quality, or risk. This aligns investment with value creation rather than with org boxes.

6) Communicate the new social contract
Use the Work Chart in town halls and leader roundtables to explain what’s changing, why it matters, and how roles evolve. Show before/after charts for one flow to make the change tangible. Invite feedback; capture exceptions; iterate.


Stop reorganizing boxes – start redesigning flows. Mandate that each executive publishes the first Work Chart for one mission-critical journey—complete with agent charters, SLAs, measurements, and named owners of record. Review it with the same rigor you apply to budget and risk. Organizations that do this won’t just “adopt AI”; they’ll build a living structure that mirrors how value is created—and compounds it.

Closing the Digital Competency Gap in the Boardroom

This article is based on a thesis I have written for the Supervisory Board program (NCC 73) at Nyenrode University, which I will complete this month. I set out to answer a practical question: how can supervisory boards close the digital competency gap so their oversight of digitalization and AI is effective and value-creating?

The research combined literature, practitioner insights, and my own experience leading large-scale digital transformations. The signal is clear: technology, data, and AI are no longer specialist topics—they shape strategy, execution, and resilience. Boards that upgrade their competence change the quality of oversight, the shape of investment, and ultimately the future of the company.


1) Business model transformation

Digital doesn’t just add channels; it rewrites how value is created and captured. The board’s role is to probe how data, platforms, and AI may alter customer problem–solution fit, value generation logic, and ecosystem position over the next 3–5–10 years. Ask management to make the trade-offs explicit: which parts of the current model should we defend, which should we cannibalize, and which new options (platform plays, data partnerships, embedded services) warrant small “option bets” now?

What to look out for: strategies that talk about “going digital” without quantifying how revenue mix, margins, or cash generation will change. Beware dependency risks (platforms, app stores, hyperscalers) that shift bargaining power over time. Leverage scenario planning and clear leading indicators—so the board can see whether the plan is working early enough to pivot or double down.

2) Operational digital transformation

The strongest programs are anchored in outcomes, not output. Boards should ask to see business results expressed in P&L and balance-sheet terms (growth, cost, capital turns), not just “go-live” milestones. Require a credible pathway from pilot to scale: gated tranches that release funding when adoption, value, and risk thresholds are met; and clear “stop/reshape” criteria to avoid sunk-cost escalation.

What to look out for: “watermelon” reporting—status that stays green on the outside while progress and adoption fall behind; vendor-led roadmaps that don’t fit the architecture; and under-resourced change management. As a rule of thumb, ensure 10–15% of major transformation budgets are reserved for change, communications, and training. Ask who owns adoption metrics and how you’ll know—early—that teams are using what’s been built.

3) Organization & culture

Technology succeeds at the speed of behaviour change. The board should examine whether leadership is telling a coherent story (why/what/how/who) and whether middle management has the capacity to translate it into local action. Probe how AI will reshape roles and capabilities, and whether the company has a reskilling plan that is targeted, measurable, and linked to workforce planning.

What to look out for: assuming tools will “sell themselves,” starving change budgets, and running transformations in a shadow lane disconnected from the real business. Look for feedback loops—engagement diagnostics, learning dashboards, peer-to-peer communities—that surface resistance early and help leadership course-correct before adoption stalls.

4) Technology investments

Oversight improves dramatically when the board insists on a North Star architecture that makes trade-offs visible: which data foundations come first, how integration will work, and how security/privacy are designed in. Investments should be staged, with each tranche linked to outcome evidence and risk mitigation, and with conscious decisions about vendor lock-in and exit options.

What to look out for: shiny-tool syndrome, financial engineering that ignores lifetime Total Cost of Ownership (TCO), and weak vendor due diligence. Ask for risk analysis (e.g., cloud and vendor exposure) and continuity plans that are actually tested. Expect architecture reviews by independent experts on mission-critical choices, so the board gets a clear view beyond vendor narratives.

5) Security & compliance

Cyber, privacy, and emerging AI regulation must be treated as enterprise-level risks with clear ownership, KPIs, and tested recovery playbooks. Boards should expect regular exercises and evidence that GDPR, NIS2, and AI governance are embedded in product and process design—not bolted on at the end.

What to look out for: “tick-the-box” compliance that produces documents rather than resilience, infrequent or purely theoretical drills, and untested backups. Probe third-party and supply-chain exposure as seriously as internal controls. The standard is not perfection; it’s informed preparedness, repeated practice, and learning from near-misses.


Seven structural moves that work

  1. Make digital explicit in board profiles. Use a competency matrix that distinguishes business-model, data/AI, technology, and cyber/compliance fluency. Recruit to close gaps or appoint external advisors—don’t hide digital under a generic “technology” label.
  2. Run periodic board maturity assessments. Combine self-assessment with executive feedback to identify capability gaps. Tie development plans to the board calendar (e.g., pre-strategy masterclasses, deep-dives before major investments).
  3. Hard-wire digital/AI into the agenda. Move from ad-hoc updates to a cadence: strategy and scenario sessions, risk and resilience reviews, and portfolio health checks. Make room for bad news early so issues surface before they become expensive.
  4. Adopt a board-level Digital & IT Cockpit. Track six things concisely: run-the-business efficiency, risk posture, innovation enablement, strategy alignment, value creation, and future-proofing (change control, talent, and architecture). Keep trends visible across quarters.
  5. Establish a Digital | AI Committee (where applicable). This complements—not replaces—the Audit Committee. Mandate: opportunities and threats, ethics and risk, investment discipline, and capability building. The committee prepares the ground; the full board takes the decisions.
  6. Use independent expertise by default on critical choices. Commission targeted reviews (architecture, vendor due diligence, cyber resilience) to challenge internal narratives. Independence is not a luxury; it’s how you avoid groupthink and discover blind spots in time.
  7. Onboard and upskill continuously. Provide a digital/AI onboarding for new members; schedule briefings with external experts; and use site visits to see real adoption. Treat learning like risk management: systematic, scheduled, and recorded.

Do you need a separate “Digital Board”?

My reflection: competence helps, but time and attention are the true scarcities. In digitally intensive businesses—where data platforms, AI-enabled operations, and cyber exposure shape enterprise value and are moving fast—a separate advisory or oversight body can deepen challenge and accelerate learning. It creates space for structured debate on architecture, ecosystems, and regulation without crowding out other board duties.

This isn’t a universal prescription. In companies where digital is material but not defining, strengthening the main board with a committee and better rhythms is usually sufficient. But when the operating model’s future rests on technology bets, a dedicated Digital Board (or equivalent advisory council) can bring the needed altitude, continuity, and specialized challenge to help the supervisory board make better, faster calls.


What this means for your next board cycle

The practical message from the thesis is straightforward: digital oversight is a core board responsibility that can be institutionalised. Start by clarifying the capability you need (the competency matrix), then hard-wire the conversation into the board’s rhythms (the agenda and cockpit), and raise the quality of decisions (staged investments, independent challenge, real adoption metrics). Expect a culture shift: from project status to value realization, from tool choice to architecture, from compliance as paperwork to resilience as practice.

Most importantly, treat this as a journey. Boards that improve a little each quarter—on fluency, on the sharpness of their questions, on the discipline of their investment decisions—create compounding advantages. The gap closes not with a single appointment or workshop, but with deliberate governance that learns, adapts, and holds itself to the same standard it asks of management.