How AI is Transforming Organisational Memory – Defence Sector and UK Emergency Services Doctrine

April 28, 2026

In the first article of this 4-part series we explored aviation’s blueprint for AI-enhanced Organisational Learning – a model built on systematic safety, layered protection, and continuous operational monitoring. These principles demonstrate how AI can actively support decision-making in safety-critical environments rather than simply archiving past experience.

In Part 2, we examine how defence organisations apply similar principles in complex operational environments, and how UK emergency services are developing their own doctrinal approaches to AI that balance innovation with operational reality.

Defence Sector: Human-Machine Teaming in Complex Environments

The UK MOD’s Defence AI Playbook and Strategy demonstrate how intelligent systems should augment human decision-making in high-stakes environments. The MOD recognises that “AI has enormous potential to enhance our capabilities, improve productivity and maximise our strategic advantage,” with a clear vision to become “the world’s most effective, efficient, trusted and influential Defence organisation for its size” in terms of AI adoption.

Similarly, the US Air Force’s doctrine on Artificial Intelligence emphasises that “military discretion remains with Airmen, but AI can enable faster and superior operational decisions.” The doctrine outlines three distinct human-machine teaming constructs:

  • “human-in-the-loop” (the machine recommends; a person decides)
  • “human-on-the-loop” (the machine’s recommendation is implemented unless a person vetoes it)
  • “human-off-the-loop” (the machine decides without human override)

For Lessons Management systems in emergency services, the human-in-the-loop model is the most appropriate. AI provides relevant lessons, identifies patterns across incidents, and suggests connections – but incident commanders, tactical advisors, clinical leads, and operational crews retain full authority over how that intelligence informs their response, as the sketch below illustrates.
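
To make the distinction concrete, here is a minimal Python sketch of the three constructs applied to a lesson recommendation. The TeamingMode enum, the LessonRecommendation type, and the callback names are illustrative assumptions, not part of any published doctrine or product.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class TeamingMode(Enum):
    """The three human-machine teaming constructs from the US Air Force doctrine."""
    HUMAN_IN_THE_LOOP = auto()   # machine recommends, person decides
    HUMAN_ON_THE_LOOP = auto()   # machine acts unless a person vetoes
    HUMAN_OFF_THE_LOOP = auto()  # machine decides without override

@dataclass
class LessonRecommendation:
    """A hypothetical AI-surfaced lesson from previous incidents."""
    lesson_id: str
    summary: str
    relevance: float  # assumed to come from an upstream retrieval model

def apply_recommendation(
    rec: LessonRecommendation,
    mode: TeamingMode,
    commander_accepts: Callable[[LessonRecommendation], bool],
    commander_vetoes: Callable[[LessonRecommendation], bool],
) -> bool:
    """Return True if the lesson should inform the current response."""
    if mode is TeamingMode.HUMAN_IN_THE_LOOP:
        # Nothing is applied unless a qualified person actively says yes.
        return commander_accepts(rec)
    if mode is TeamingMode.HUMAN_ON_THE_LOOP:
        # Applied by default, but a person can still stop it.
        return not commander_vetoes(rec)
    # HUMAN_OFF_THE_LOOP is deliberately rejected for Lessons Management:
    # operational authority must stay with emergency service personnel.
    raise ValueError("off-the-loop automation is inappropriate here")
```

Encoding the off-the-loop branch as an error makes the doctrinal choice explicit in the code itself: a Lessons Management system should be structurally incapable of acting without a person.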

The UK MOD’s Playbook highlights practical challenges in AI implementation that apply directly to Organisational Learning: limited data quality from legacy systems, the need for edge processing in resource-constrained environments, and the difficulty of fusing data from multiple sources. Its satellite imagery analysis capability exemplifies the principle that “the analyst remains responsible for interpreting what is seen to deliver actionable insight to the frontline”: AI flags potential areas of interest, but human expertise makes the critical judgments – a principle equally applicable when AI surfaces potentially relevant lessons from previous incidents.
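
The flag-then-review pattern the Playbook describes reduces to a short sketch. Everything here – the fuse_and_flag function, the model_score field, the example feeds – is a hypothetical illustration of the principle, not the MOD’s actual pipeline.

```python
from typing import Iterable

def fuse_and_flag(feeds: Iterable[list[dict]],
                  threshold: float = 0.8) -> list[dict]:
    """Merge observations from several sources and flag high-scoring
    items into a human review queue. Flagging is the model's whole job:
    interpreting what is flagged stays with the analyst."""
    flagged = [obs for feed in feeds for obs in feed
               if obs["model_score"] >= threshold]
    # Highest-confidence items first, so the analyst sees them soonest.
    return sorted(flagged, key=lambda o: o["model_score"], reverse=True)

# Example: two hypothetical sources feeding one analyst queue.
satellite = [{"source": "satellite", "region": "grid-7", "model_score": 0.91}]
ground = [{"source": "ground", "region": "grid-7", "model_score": 0.55}]
queue = fuse_and_flag([satellite, ground])  # only the 0.91 item is queued
```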

Perhaps the most valuable idea in this context is the US Air Force’s emphasis on “AI fluency” rather than mere AI literacy. Personnel must develop “proficiency beyond basic literacy to comprehend the application, interpretation, and effective navigation of AI systems.” For emergency services, this means developing operational personnel and commanders who understand not just how to use Lessons Management systems, but how AI-powered tools generate insights and why certain recommendations emerge from the accumulated experience of the service.

UK Emergency Services: Doctrinal Approaches to AI

Across UK Blue Light services, organisations are developing robust frameworks for AI adoption grounded in operational reality and ethical considerations.

The National Fire Chiefs Council’s (NFCC) AI and Digital Ethics Framework demonstrates how emergency services are approaching intelligent systems with doctrinal rigour. The NFCC situates AI adoption in principles of “transparency, inclusion, fairness, and safety, drawing from public sector experience and international standards, but tailored to operational realities like frontline trust, public legitimacy, and risk-critical environments.”

This NFCC approach emphasises that AI doctrine provides “clarity of purpose, demanding that every deployed tool directly supports life-saving outcomes, community safety, or firefighter wellbeing.” Critically, the framework stresses that “the aim is to enhance, not replace, the professionalism and judgment of personnel.”

Policing is similarly advancing AI capabilities within careful ethical boundaries. Guidance from the National Police Chiefs’ Council (NPCC) and the College of Policing emphasises human oversight, algorithmic transparency, and fairness – considerations that matter particularly given policing’s use of AI in areas like facial recognition, predictive analytics, and intelligence analysis. The principle that operational officers retain decision-making authority remains fundamental.

NHS and ambulance services are likewise exploring AI across clinical decision support, demand prediction, and resource optimisation. NHS AI initiatives emphasise clinical safety, patient consent, and the principle that AI augments rather than replaces clinical judgment – paralleling the frameworks established in aviation and defence.

This principle applies equally to AI-enhanced Lessons Management across all Blue Light services. When a watch manager reviews lessons from previous incidents involving electric vehicle fires, a custody sergeant examines patterns from previous deaths in custody, or a paramedic accesses clinical learning from similar cardiac presentations, AI can surface relevant patterns and recommendations. But operational and clinical decision-making remains firmly with qualified emergency service personnel who understand the unique dynamics of the current situation.
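
As a deliberately simple illustration of what surfacing relevant patterns might mean in practice, the sketch below ranks archived lessons by tag overlap with the current incident. The Lesson type, the tag scheme, and the example records are all invented for this sketch; a real system would use far richer retrieval, but the advisory contract would be the same.

```python
from dataclasses import dataclass

@dataclass
class Lesson:
    lesson_id: str
    service: str        # "fire", "police", or "ambulance"
    tags: set[str]
    summary: str

def advisory_matches(query_tags: set[str], archive: list[Lesson],
                     top_n: int = 5) -> list[Lesson]:
    """Rank archived lessons by tag overlap (Jaccard similarity) with the
    current incident. The result is advisory only: it is shown to the
    watch manager, custody sergeant, or paramedic, who decides what applies."""
    scored = [(len(query_tags & l.tags) / len(query_tags | l.tags), l)
              for l in archive]
    ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
    return [l for score, l in ranked if score > 0][:top_n]

# Example: a watch manager preparing for an electric vehicle fire.
archive = [
    Lesson("L-014", "fire", {"ev", "battery", "thermal-runaway"},
           "EV battery packs can reignite hours after suppression."),
    Lesson("L-102", "fire", {"high-rise", "evacuation"},
           "Stairwell congestion delayed evacuation."),
]
for lesson in advisory_matches({"ev", "fire", "battery"}, archive):
    print(lesson.lesson_id, lesson.summary)
```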

Research by the US Department of Homeland Security confirms that first responders “express confidence in AI-supported decision tools with ‘human-in-the-loop’ oversight.” Emergency management exercises also highlighted responders’ desire for AI to “process multiple information feeds and alert them to rapid changes impacting operational plans, enabling faster, more accurate decisions during dynamic incidents.”

Across all services, the emphasis on robust governance structures (encompassing transparent AI procurement, validation, and monitoring) mirrors best practices from aviation and defence. Algorithmic decisions must be “auditable, explainable, and always subject to human oversight,” with openness to staff and communities about how systems function and what safeguards exist.
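
That auditability requirement has a simple structural core: every recommendation shown to a person, and what that person decided, produces one append-only record. A minimal sketch, with hypothetical field names:

```python
import datetime
import json

def audit_record(recommendation_id: str, shown_to: str,
                 decision: str, rationale: str) -> str:
    """One line of an append-only audit log: what the system suggested,
    who saw it, and what the human decided."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "recommendation_id": recommendation_id,
        "shown_to": shown_to,        # role, e.g. "incident commander"
        "human_decision": decision,  # "accepted" | "rejected" | "deferred"
        "rationale": rationale,
    })

# Example: recording a watch manager's decision on a surfaced lesson.
with open("lesson_audit.log", "a") as log:
    log.write(audit_record("L-014", "watch manager", "accepted",
                           "battery reignition risk applies to this incident")
              + "\n")
```

A human-readable, append-only log along these lines is what turns “auditable and always subject to human oversight” from an assertion into something that can be checked after the fact.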

In Part 3, we’ll examine the critical elements that make AI systems trustworthy in operational environments – transparency, explainability, fail-safe design, and data quality. These aren’t just technical considerations; they’re the foundation of operational confidence and effectiveness across all emergency services.


Other articles in this series will be linked below as they are published.