Lifelong Learning in the Age of AI – My Playbook

In September 2025, I received two diplomas: IMD’s AI Strategy & Implementation and Nyenrode University’s Corporate Governance for Supervisory Boards. I am proud of both, and more importantly, they cap off a period in which I deliberately rebuilt how I learn.

With AI accelerating change and putting top-tier knowledge at everyone’s fingertips, the edge goes to leaders who learn—and apply—faster than the market moves. In this issue I am not writing theory; I am sharing my learning journey of the past six months—what I did, what worked, and the routine I will keep using. If you are a leader, I hope this helps you design a learning system that fits a busy executive life.


My Learning System – 3 pillars

1) Structured learning

This helped me to gain the required depth:

  • IMD — AI Strategy & Implementation. I connected strategy to execution: where AI creates value across the business, and how to move from pilots to scaled outcomes. In upcoming newsletters, I will share insights on specific topics we went deep on in this course.
  • Nyenrode — Corporate Governance for Supervisory Boards. I deepened my view on board-level oversight—roles and duties, risk/compliance, performance monitoring, and strategic oversight. I authored my final paper on how to close the digital gap in supervisory boards (see also my earlier article).
  • Google/Kaggle’s 5-day Generative AI Intensive. Hands-on labs demystified how large language models work: what is under the hood, why prompt quality matters, where workflows can break, and how to evaluate outputs against business goals. It gave me an understanding of how to improve my use of these models.

2) Curated sources

This extended the breadth of my understanding of the use of AI.

2a. Books

Below I give a few examples; more book summaries and reviews can be found on my website: www.bestofdigitaltransformation.com/digital-ai-insights.

  • Co-Intelligence: a pragmatic mindset for working with AI—experiment, reflect, iterate.
  • Human + Machine: how to redesign processes around human–AI teaming rather than bolt AI onto old workflows.
  • The AI-Savvy Leader: what executives need to know to steer outcomes without needing to code.

2b. Research & articles
I built a personal information base with research from: HBR, MIT, IMD, Gartner, plus selected pieces from McKinsey, BCG, Strategy&, Deloitte, and EY. This keeps me grounded in capability shifts, operating-model implications, and the evolving landscape.

2c. Podcasts & newsletters
Two that stuck: AI Daily Brief and Everyday AI. Short, practical audio overviews with companion newsletters so I can find and revisit sources. They give me a quick daily pulse without drowning in feeds.

3) AI as my tutor

I am using AI to get personalised learning support.

3a. Explain concepts. I use AI to clarify ideas, contrast approaches, and test solutions using examples from my context.
3b. Create learning plans. I ask for step-by-step learning journeys with milestones and practice tasks tailored to current projects.
3c. Drive my understanding. I use different models to create learning content, provide assignments, and quiz me on my understanding.


How my journey unfolded

Here is how it played out.

1) Started experimenting with ChatGPT.
I was not an early adopter; I joined when GPT-4 was already strong. Like many, I did not fully trust it at first. I began with simple questions and asked the model to show how it interpreted my prompts. That built confidence without creating risk or frustration.

2) Built foundations with books.
I read books like Co-Intelligence, Human + Machine, and The AI-Savvy Leader. These created a common understanding of where AI helps (and does not), how to pair humans and machines, and how to organise for impact. For each book I wrote a review to anchor my learnings and share them on my website.

3) Added research and articles.
I set up a repository with research across HBR/MIT/IMD/Gartner and selected consulting research. This kept me anchored in evidence and applications, and helped me track the operational implications for strategy, data, and governance.

4) Tried additional models (Gemini and Claude).
Rather than picking a “winner,” I used them side by side on real tasks. The value was in contrast—seeing how different models frame the same question, then improving the final answer by combining perspectives. Letting models critique each other surfaced blind spots.

5) Went deep with Google + Kaggle.
The 5-day intensive course clarified what is under the hood: tokens/vectors, why prompts behave the way they do, where workflows tend to break, and how to evaluate outputs beyond “sounds plausible.” The exercises translated directly into better prompt design and started my understanding of how agents work.

6) Used NotebookLM for focused learning.
For my Nyenrode paper, I uploaded the key articles and interacted only with that corpus. NotebookLM generated grounded summaries, surfaced insights I might have missed, and reduced the risk of invented citations (by sticking to the uploaded resources). The auto-generated “podcast” is one of the coolest features I experienced and really helps to learn about the content.

7) Added daily podcasts/newsletters to stay current.
The news volume on AI is impossible to track end-to-end. AI Daily Brief and Everyday AI give me a quick scan each morning and links worth saving for later deep dives. This makes the difference between staying aware and constantly feeling behind.

8) Learned new tools and patterns at IMD.

  • DeepSeek helped me debug complex requests by showing how its reasoning model interpreted my prompt, a fantastic way to unravel complex problems.
  • Agentic models like Manus showed the next step: chaining actions and tools to complete tasks end-to-end.
  • Custom GPTs (within today’s LLM platforms) let me encode my context, tone, and recurring workflows, boosting consistency and speed across repeated tasks.

Bringing it together: a realistic cadence

Leaders do not need another to-do list; they need a routine that works. Here is the rhythm I am using now:

Daily

  • Skim one high-signal newsletter or listen to a podcast.
  • Capture questions to explore later.
  • Learn by doing with the various tools.

Weekly

  • Learn: read one or more papers/articles on AI-related topics.
  • Apply: use one idea on a live problem; go deeper by interacting with AI.
  • Share: create my weekly newsletter based on my learnings.

Monthly

  • Pick one learning topic and read a number of primary sources, not just summaries.
  • Draft an experiment with goal, scope, success metric, risks, and data needs, using AI to pressure-test assumptions.
  • Review with thought leaders or colleagues for challenge and alignment.

Quarterly

  • Read at least one book that expands my mental models.
  • Create a summary for my network. Teaching others cements my own understanding.

(Semi-)Annually

  • Add a structured program or certificate to go deep and to benefit from peer debate.

Closing

The AI era compresses the shelf life of knowledge. Waiting for a single course is no longer enough. What works is a learning system: structured learning for depth, curated sources for breadth, and AI as your tutor for speed. That has been my last six months, and it is a routine I will continue.

Harnessing Curiosity for Digital Transformation Success

In a world shaped by accelerating change, new technologies, and shifting customer expectations, digital transformation is no longer optional—it’s a strategic imperative. But technology alone doesn’t drive transformation. The real differentiator lies in human capabilities—and among these, curiosity stands out as a key enabler of successful change.

Curiosity: The Human Advantage in a Digital World

Curiosity isn’t just about asking questions. It’s the active pursuit of new knowledge, perspectives, and possibilities. It fuels learning, drives innovation, and enables people to adapt quickly in fast-moving environments.

As Deloitte puts it in their research on digital fluency, “Curiosity is the catalyst that allows people to keep pace with technology—and lead with it.”

For transformation leaders, this has direct implications:

  • Curious individuals are more likely to experiment, learn, and improve.
  • Curious teams are better at breaking silos, seeking input, and iterating solutions.
  • Curious cultures are more resilient, adaptive, and open to what’s next.

Research That Connects Curiosity to Transformation Success

The Business Case for Curiosity – Harvard Business Review (Francesca Gino, 2018)

  • Curious employees are more engaged, collaborative, and better at decision-making.
  • Organizations that foster curiosity experience higher innovation and reduced groupthink.
  • Read the article →

The Mindsets of Transformation Leaders – McKinsey & Company

  • Highlights intellectual curiosity as a hallmark of successful transformation leaders.
  • Curious leaders are more willing to challenge assumptions, adapt strategy, and engage stakeholders.
  • Read the article →

Human + Machine: Reimagining Work in the Age of AI – Paul Daugherty & H. James Wilson (Accenture, 2018)

  • Emphasizes that in the AI era, human skills like curiosity are vital complements to automation.
  • Curious individuals are better at interpreting data, asking better questions, and guiding AI to impactful outcomes.
  • Explore Human + Machine →

The Curiosity Gap: What Holds Teams Back

Despite its value, many organisations unintentionally stifle curiosity:

  • Rigid hierarchies discourage questioning.
  • Execution pressure leaves no room for reflection.
  • Fear of failure shuts down experimentation.
  • Overreliance on expertise limits fresh thinking.

These are culture issues, not people issues. Leaders play a pivotal role in changing this dynamic.

How Leaders Can Foster Curiosity

Transformation leaders can amplify curiosity in practical, powerful ways:

  • Ask more than tell: Use open-ended questions to spark exploration.
  • Normalize experimentation: Frame pilots and prototypes as learning opportunities.
  • Listen actively: Signal that new ideas and diverse perspectives are valued.
  • Reward growth: Recognize not just performance, but how people learn and adapt.
  • Lead with humility: Show you’re learning too—and invite others on the journey.

Final Word

Digital transformation is ultimately a human transformation. And curiosity is the mindset that keeps humans relevant, engaged, and future-ready.

It’s what helps a data analyst spot an emerging trend, a product manager test a radical new idea, and a CEO rethink a decades-old business model. It’s also what allows us to partner more effectively with AI—asking the right questions, interpreting signals, and imagining better solutions.

As you lead your organisation through transformation, don’t just invest in platforms and capabilities. Invest in curiosity. It’s the spark that turns potential into progress.

Balancing between Balcony and Dance Floor – Tip for Leadership in Digital Transformation

The “Balcony and Dance Floor” metaphor, introduced by Ronald Heifetz and Marty Linsky, offers a powerful framework for balancing hands-on leadership with strategic oversight. Leaders must be immersed in execution (the dance floor) while also stepping back to gain a broader perspective (the balcony). Striking this balance is crucial for digital transformation success.

Understanding the Metaphor in a Digital Transformation Context

  • The Dance Floor: This represents the daily execution of digital initiatives—overseeing system rollouts, engaging with teams, managing stakeholder concerns, and addressing immediate roadblocks. Leaders who remain solely on the dance floor risk being overwhelmed by operational challenges, losing sight of strategic priorities.
  • The Balcony: This vantage point provides the necessary space to assess overall progress, identify patterns, and anticipate challenges. A balcony perspective allows leaders to ensure that digital initiatives align with long-term business goals, rather than being reactive to short-term operational issues.

Applying the Concept to Digital Transformation Leadership

  1. Maintaining Strategic Alignment: Leaders must continuously step onto the balcony to ensure digital transformation initiatives align with broader business objectives. Without this, transformation efforts may become disjointed or lose executive sponsorship.
  2. Balancing Execution with Reflection: While hands-on engagement is necessary to drive momentum, leaders should also create time for reflection, whether through strategic reviews, executive meetings, or external benchmarking.
  3. Empowering Teams While Providing Vision: Leaders should guide digital transformation by setting a clear vision from the balcony but allow teams to execute with autonomy on the dance floor. This approach fosters innovation while maintaining alignment with the strategic roadmap.
  4. Leveraging Data and Insights: Digital transformation generates vast amounts of data. Leaders must use this data to inform their balcony perspective, identifying trends and adjusting strategies as necessary.
  5. Ensuring Adaptability: Transformation initiatives rarely go as planned. A leader’s ability to move between the dance floor and balcony ensures they can adjust strategies dynamically, responding to challenges without losing sight of the ultimate goal.

The Leadership Imperative

Effective digital transformation leaders seamlessly transition between execution and strategic reflection. Those who remain only on the dance floor risk micromanagement and burnout, while those who stay only on the balcony may become disconnected from execution realities. By mastering this balance, leaders can guide their organizations through digital transformation with clarity, resilience, and adaptability.

In an era of rapid technological evolution, adopting the “Balcony and Dance Floor” approach is more than a leadership technique—it is a necessity for driving sustainable digital change.

Enhance Project Success with Pre-Mortem Techniques

A pre-mortem is a proactive risk management exercise that helps teams anticipate potential failures before they occur. Unlike traditional risk assessments, which often focus on known risks, a pre-mortem encourages teams to imagine a scenario where the initiative has already failed and work backward to identify the causes. This method:

  • Uncovers hidden risks that might otherwise be overlooked.
  • Encourages open and candid discussions within teams.
  • Enhances risk mitigation strategies early in the process.
  • Strengthens team alignment and shared accountability for success.

What Are the Outcomes of a Pre-Mortem?

When executed effectively, a pre-mortem delivers several valuable outcomes:

  • A comprehensive list of potential failure points.
  • A prioritized risk register with mitigation actions.
  • Stronger team cohesion and ownership over the initiative’s success.
  • Improved decision-making, ensuring proactive rather than reactive responses to risks.

How to Execute a Pre-Mortem

Follow these structured steps to conduct an effective pre-mortem:

  1. Set the Stage: Gather the key stakeholders, including project sponsors, team leads, and operational experts. Ensure a psychologically safe environment where candid discussions are encouraged.
  2. Define the Scenario: Present the hypothetical situation: “It is six months (or an appropriate timeframe) in the future, and the project has completely failed. What went wrong?”
  3. Brainstorm Failure Points: Each participant individually lists reasons for failure, considering strategic, operational, and technical factors.
  4. Share and Categorize: Consolidate and group similar failure points into themes (e.g., governance issues, resource constraints, external disruptions).
  5. Prioritize Risks: Use voting, ranking, or a risk assessment matrix to determine which failure points are the most critical.
  6. Develop Mitigation Actions: For each high-priority risk, define preventive measures and contingency plans.
  7. Integrate into Governance: Assign ownership for risk monitoring and integrate these insights into ongoing project reviews.
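Steps 5 through 7 above amount to maintaining a prioritized risk register. As a minimal sketch (the failure points, vote counts, and thresholds below are hypothetical, not from any real workshop), the register can be kept as simple structured data:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    theme: str            # theme from the categorization round (step 4)
    votes: int            # team votes from the prioritization round (step 5)
    mitigation: str = ""  # filled in for high-priority risks (step 6)
    owner: str = ""       # assigned during governance integration (step 7)

# Hypothetical failure points gathered in the brainstorm (steps 3-4)
register = [
    Risk("Executive sponsor loses interest", "governance", votes=7),
    Risk("Key data engineers reassigned mid-project", "resources", votes=5),
    Risk("Vendor API deprecated during rollout", "external", votes=2),
]

# Step 5: prioritize by votes, highest first
register.sort(key=lambda r: r.votes, reverse=True)

# Steps 6-7: attach mitigations and owners to the top risks only
for risk in register[:2]:
    risk.mitigation = "Preventive measure and contingency plan defined"
    risk.owner = "Assigned in next governance review"

for risk in register:
    print(f"[{risk.votes} votes] {risk.theme}: {risk.description}")
```

A spreadsheet does the same job; the point is that the output of the session is a living, ranked artifact with named owners, not a one-off flipchart.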

When and With Whom Should You Conduct a Pre-Mortem?

  • When: Ideally, before finalizing the transformation strategy or at key milestones in major initiatives (e.g., post-planning, before execution phases, during major pivots).
  • With Whom: A cross-functional group including executives, project managers, functional leads, risk officers, and frontline implementers.

By embedding the pre-mortem approach into your transformation governance, you significantly improve the likelihood of success by proactively identifying and addressing risks before they materialize.

This technique not only improves project outcomes but also builds stronger teams through enhanced communication and psychological safety.

The Right Question: Importance of Defining Problems for Effective AI and Digital Solutions


Why Problem Definition is Critical in Digital Transformation

In the rush to adopt digital and AI solutions, many organizations fall into a common trap—jumping straight to implementation without clearly defining the problem they aim to solve. This often leads to expensive failures, misaligned solutions, and wasted effort.

Defining the right problem is not just an operational necessity but a strategic imperative for executives leading digital transformation. A well-framed problem ensures that technology serves a real business need, aligns with strategic goals, and delivers measurable impact.

As a remark often attributed to Albert Einstein puts it:
“If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.”

This article presents a practical framework for defining problems effectively—leveraging structured problem-solving methods such as Lean Thinking’s “5 Whys,” root cause analysis, and validated learning to guide better decision-making.


A Practical Framework for Problem Definition

Step 1: Identify the Symptoms

A common mistake is confusing symptoms with root problems. AI or digital solutions often get deployed to address surface-level inefficiencies, but without understanding their underlying causes, organizations risk treating the wrong issue.

  • Gather data and observations:
    Use operational data, system logs, financial reports, and performance metrics to identify inefficiencies or gaps.
  • Leverage customer and employee feedback:
    Conduct surveys, analyze customer support transcripts, and interview employees to gain qualitative insights.
  • Avoid rushing to conclusions:
    Be wary of “obvious” problems—many inefficiencies stem from deeper systemic issues.

💡 Example: A retail company notices declining online conversion rates. Instead of assuming they need a chatbot for engagement, they investigate further.


Step 2: Uncover the Root Causes

Once symptoms are identified, the next step is to determine their underlying cause.

  • Use the “5 Whys” technique:
    Repeatedly ask “Why is this happening?” until you uncover the fundamental issue.
  • Employ Fishbone (Ishikawa) Diagrams:
    Categorize possible causes into key areas such as process inefficiencies, technology gaps, and human factors.
  • Conduct stakeholder workshops:
    Cross-functional teams bring diverse perspectives that help uncover hidden issues.

💡 Example: A financial services company automates loan approvals to reduce delays. But using the “5 Whys,” they realize the real issue is fragmented customer data across legacy systems, not just a slow approval process.
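The "5 Whys" chain itself is simple enough to capture as data. Here is a minimal sketch, with a hypothetical chain loosely based on the loan-approval example above (the specific answers are illustrative, not from the source):

```python
# Each entry answers "Why is the previous statement happening?"
# The first entry is the symptom; the last is the candidate root cause.
five_whys = [
    "Loan approvals are slow",
    "Underwriters wait days for complete applicant files",
    "Files must be assembled manually from several systems",
    "Customer data is fragmented across legacy systems",
    "No integration layer was built after past acquisitions",
]

root_cause = five_whys[-1]

for depth, answer in enumerate(five_whys):
    label = "Symptom" if depth == 0 else f"Why #{depth}"
    print(f"{label}: {answer}")
print(f"Candidate root cause: {root_cause}")
```

Writing the chain down this explicitly makes it easy to challenge in a workshop: every link should survive the question "is that really why?".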


Step 3: Craft a Clear Problem Statement

Once the root cause is determined, the problem must be precisely defined to ensure alignment and clarity.

  • Use the “Who, What, Where, When, Why, How” framework:
    Articulate the problem in a structured manner.
  • Make the statement SMART (Specific, Measurable, Achievable, Relevant, Time-bound):
    Avoid vague, high-level issues that lead to unfocused solutions.
  • Tie the problem to business impact:
    How does this problem affect revenue, efficiency, customer satisfaction, or competitive advantage?

Example Problem Statement:
“The customer support team’s average resolution time is 15 minutes, which is 5 minutes over our goal, due to the lack of a centralized customer knowledge base. This is leading to lower customer satisfaction and higher support costs.”
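The structure behind such a statement can be captured as a simple template. This is a minimal sketch, with the fields filled in from the customer-support example above (the field names and class are my own illustration, not a standard framework):

```python
from dataclasses import dataclass

@dataclass
class ProblemStatement:
    # A simplified take on the "Who, What, Why" framing, with SMART
    # metrics: a measured value, a target, a root cause, and impact.
    who: str
    what: str
    current_metric: float   # measured value, in minutes
    target_metric: float    # goal, in minutes
    root_cause: str
    business_impact: str

    def summary(self) -> str:
        gap = self.current_metric - self.target_metric
        return (f"{self.who}: {self.what} is {self.current_metric:g} min, "
                f"{gap:g} min over the {self.target_metric:g} min goal, "
                f"due to {self.root_cause}. Impact: {self.business_impact}.")

stmt = ProblemStatement(
    who="Customer support team",
    what="average resolution time",
    current_metric=15, target_metric=10,
    root_cause="lack of a centralized customer knowledge base",
    business_impact="lower customer satisfaction and higher support costs",
)
print(stmt.summary())
```

Forcing every problem into this shape is a quick test of whether it is actually SMART: if a field cannot be filled in, the problem is not yet well defined.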


Step 4: Validate the Problem

Before investing in a full-scale solution, the problem definition must be validated to ensure it is correctly framed.

  • Test assumptions through small-scale experiments or prototypes:
    A/B testing, proof-of-concepts, or simulations can validate whether solving this problem has the expected impact.
  • Gather feedback from stakeholders:
    Ensure alignment across business units, IT teams, and end users.
  • Iterate if needed:
    If the problem statement doesn’t hold up under real-world conditions, refine it before proceeding.

💡 Example: A hospital wants AI-driven diagnostics to reduce misdiagnoses. A pilot project reveals that inconsistent patient data, not diagnostic errors, is the real issue—shifting the focus to data standardization rather than AI deployment.


Conclusion: Problem Definition as a Competitive Advantage

Executives must ensure that problem definition precedes solution selection in digital transformation. By following a structured framework, leaders can avoid costly missteps, align digital investments with business priorities, and drive real impact.

The best AI or digital solution in the world cannot fix the wrong problem. Taking the time to define the problem correctly is not just best practice—it’s a competitive advantage that enables sustainable transformation and long-term success.


What’s Your Experience? Let’s Continue the Conversation!

How do you approach problem definition in your digital and AI initiatives? Have you faced challenges in aligning solutions with real business needs?

💬 Join the conversation in the comments below or connect with me to discuss how your organization can improve its problem-definition process.

📩 Subscribe to my newsletter on LinkedIn https://bit.ly/3CNXU2y for insights on digital transformation and leadership strategies.

🔍 Need expert guidance? If you’re looking to refine your digital or AI strategy, let’s connect—schedule a consultation to explore how we can drive transformation the right way.


The Power of Clarity: Why Clear RACIs Are Essential for Successful Transformations

One of the biggest challenges in implementing transformations and new processes is defining who is responsible for what. Unclear roles can lead to inefficiencies, confusion, and delays—both during the transition phase and once the new process is fully operational. To avoid these pitfalls, organizations must establish clear RACI (Responsible, Accountable, Consulted, and Informed) matrices upfront.

The Role of RACI in the Implementation Phase

During the implementation phase of a transformation, multiple teams and individuals must collaborate effectively. Without a well-defined RACI, responsibilities can overlap or fall through the cracks, leading to bottlenecks and misalignment. Here’s how a well-structured RACI enhances the transition phase:

  1. Clear Accountability: Identifies who owns each task, ensuring that decisions are made efficiently.
  2. Defined Responsibilities: Distinguishes between those executing the work (Responsible) and those ensuring it is done correctly (Accountable).
  3. Seamless Collaboration: Engages key stakeholders (Consulted) for input without causing unnecessary delays.
  4. Effective Communication: Keeps relevant parties (Informed) updated, reducing misunderstandings and redundant efforts.

By establishing a clear RACI at the outset, organizations can drive smoother transitions, reduce resistance, and keep projects on track.
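The four roles above also lend themselves to a simple consistency check. As a minimal sketch (the tasks and role names are hypothetical), a RACI matrix can be held as a mapping and validated for the classic rules: exactly one Accountable and at least one Responsible per task:

```python
# RACI matrix as a dict: task -> {role: letter}, with letters
# R (Responsible), A (Accountable), C (Consulted), I (Informed).
raci = {
    "Define target process": {"Program lead": "A", "Process analyst": "R",
                              "Ops manager": "C", "IT lead": "I"},
    "Configure system":      {"IT lead": "A", "Developers": "R",
                              "Process analyst": "C", "Ops manager": "I"},
    "Train end users":       {"Ops manager": "A", "Trainers": "R",
                              "Program lead": "I"},
}

def validate(matrix):
    """Return a list of issues: tasks lacking exactly one A or any R."""
    issues = []
    for task, roles in matrix.items():
        letters = list(roles.values())
        if letters.count("A") != 1:
            issues.append(f"{task}: needs exactly one Accountable")
        if "R" not in letters:
            issues.append(f"{task}: needs at least one Responsible")
    return issues

print(validate(raci) or "RACI matrix is consistent")
```

The check is trivial, but running it (mentally or literally) catches the two failure modes that derail transitions most often: tasks with no clear owner, and tasks with two.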

The Importance of RACI in the End State

Once the new process is fully implemented, maintaining role clarity is just as critical. Many transformation efforts stumble post-implementation due to a lack of sustained accountability. A well-defined RACI ensures:

  1. Operational Efficiency: Employees understand their ongoing responsibilities, reducing friction in daily operations.
  2. Consistent Decision-Making: Clear lines of accountability ensure that decisions are made efficiently and by the right stakeholders.
  3. Sustained Process Adoption: By assigning ownership, organizations can ensure that new processes remain effective and continuously improved.
  4. Reduced Role Ambiguity: Employees feel confident in their responsibilities, leading to higher engagement and performance.

Best Practices for Implementing RACIs

  1. Engage Stakeholders Early: Involve key players in defining roles to ensure buy-in and practical alignment.
  2. Keep It Simple and Actionable: Avoid overly complex RACIs that create confusion rather than clarity.
  3. Review and Adapt: RACIs should be dynamic, evolving with organizational needs and process improvements.
  4. Communicate and Train: Ensure that all stakeholders understand their roles and how they contribute to the transformation’s success.

Conclusion

Defining clear RACIs is not a bureaucratic exercise—it is a strategic enabler for transformation success. By ensuring clarity in responsibilities during both the implementation phase and the steady state, organizations can drive accountability, efficiency, and long-term sustainability. Investing time upfront in a well-structured RACI matrix pays dividends in reducing friction and ensuring transformation efforts deliver lasting impact.

See Do Teach Method – A Powerful Approach to Learning and Capability Building

The See Do Teach method is a transformative approach to skill acquisition, team development, and leadership building. Rooted in experiential learning, it creates a dynamic cycle of observation, practice, and instruction that ensures not only the mastery of tasks but also the empowerment of individuals to become educators themselves. Here’s why it works so effectively and how it can be applied.


Why the See Do Teach Method Works

The See Do Teach method is built on the principle of active engagement, which is proven to improve retention and understanding. Each stage builds on the last, creating a progressive learning pathway that embeds skills deeply:

  1. Observation Enhances Understanding: Seeing a task performed by an expert provides learners with a clear example of success, demystifying the process and showcasing best practices.
  2. Practice Solidifies Skills: Doing the task immediately after observing allows learners to apply their newfound knowledge in a safe environment, with room for feedback and improvement.
  3. Teaching Deepens Expertise: Explaining and demonstrating a skill to others reinforces the teacher’s mastery and ensures that knowledge is disseminated effectively across teams.


Breaking Down the Steps

Step 1: See

Observation is the foundation of the See Do Teach method. In this stage, learners watch a skilled individual perform a task, noting critical steps, techniques, and nuances.

Example: In order to train people inside the organization to become transformation managers, I worked with one of the big four strategy consultancies to demonstrate actual projects in the organization to our candidates.

Step 2: Do

After observing, learners move on to hands-on practice. Here, they replicate the task under guidance, applying what they’ve seen while gaining firsthand experience.

Example: After observing the expert consultants on one or two projects, the roles changed, with the internal teams executing the projects and the expert consultants reviewing and giving advice.

Step 3: Teach

The final stage involves teaching the newly learned skill to others. This step requires learners to organize their understanding and communicate it effectively, cementing their knowledge.

Example: After executing a couple of projects themselves, the internal teams became teachers to the next cohort of candidates (and the external consultants phased out) in their See-Do cycle.


The Flywheel Effect

The See Do Teach method operates as a flywheel—a self-reinforcing cycle that gains momentum over time. As learners become teachers, they perpetuate the process, creating a culture of continuous learning and growth. Over time, this approach not only spreads knowledge but also cultivates leadership qualities and drives organizational excellence.

Example in Practice: A company adopts the See Do Teach method to train employees on a new software system. Initially, a few experts demonstrate its usage (See). Next, these employees practice and refine their skills (Do). Finally, they teach the system to others (Teach). Within weeks, the organization’s proficiency with the software grows exponentially, reducing reliance on external trainers and fostering a collaborative learning environment.


Conclusion

The See Do Teach method is a simple yet profound approach to learning that combines observation, hands-on practice, and teaching. By embedding this cycle into your organization or personal development strategies, you can create a robust framework for skill acquisition, team growth, and leadership development. Over time, the method becomes a powerful flywheel, driving sustainable success and empowering individuals to achieve their full potential.