Serious Incident Reviews in Adult Social Care: Getting RCA Right and Proving Change

Serious incidents test a provider’s culture, systems and leadership. Done well, a serious incident review strengthens safety and accountability and demonstrates robust learning from incidents and continuous improvement under strong governance and leadership. Done poorly, it becomes a paperwork exercise that fails to prevent repeat harm and leaves commissioners and CQC unconvinced.

This article explains how adult social care services should plan and deliver proportionate serious incident reviews (including root cause analysis), how to turn findings into operational change, and how to evidence impact over time.

What counts as a “serious incident” in adult social care

Definitions vary by commissioner and setting, but serious incidents commonly include:

  • Unexpected death or significant avoidable harm
  • Serious safeguarding concerns, including abuse or neglect
  • Serious medication events, including overdose or missed critical medicines
  • Serious falls with injury, hospital admission or repeated near-misses
  • High-impact restrictive practice concerns or unlawful restrictions
  • Widespread system failures, for example staffing collapse or unsafe premises issues

The operational test is whether the incident indicates a meaningful failure of controls, supervision, assessment, or escalation processes — not just whether it is “headline” serious.

Set the review up properly: scope, roles and evidence capture

Serious incident reviews fail most often because they start late, lack a clear scope, or rely on partial evidence. A good set-up includes:

  • A named lead (with sufficient seniority and independence)
  • Clear terms of reference: what questions the review will answer
  • Evidence list: records, rosters, supervision notes, audits, care plans, call logs
  • Immediate risk controls while the review is underway
  • A timeline and governance route for oversight and sign-off

Independence matters: if the reviewing manager is too close to the incident, findings can become defensive rather than analytical.

Operational example 1: Serious fall prompts system-level change

Context: A care home experiences a serious fall at night resulting in hospital admission. The resident had previous near-misses and a known history of nocturnal wandering.

Support approach: The service initiates a serious incident review with clear scope: assessment accuracy, night staffing response, environmental factors, and escalation to clinical teams.

Day-to-day delivery detail: The review maps the resident’s risk assessment history, examines night checks and call bell response times, and compares staffing levels and skill mix against acuity. It finds that the falls risk assessment was generic, the night team relied on informal knowledge rather than written prompts, and corridor lighting created shadows that increased disorientation. Immediate controls are introduced (enhanced night checks, clear bedside prompts, temporary sensor use where appropriate). Longer-term actions include rewriting the night risk checklist, introducing a structured night handover that flags time-specific risks, and changing lighting and signage.

How effectiveness or change is evidenced: Governance tracks falls trends for night hours specifically, plus audit checks of new night handover records and risk assessment quality. Repeat serious falls reduce, and the provider can show the audit trail from review to change.
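As a purely illustrative sketch, the kind of night-hours falls tracking described above could be produced from an incident log with a few lines of Python. The record fields and the 22:00–06:00 night window are assumptions for the example, not a prescribed format; a service would align them with its own recording system and rota.

```python
from datetime import datetime
from collections import Counter

# Illustrative incident log entries; field names are assumptions, not a standard.
incidents = [
    {"type": "fall", "when": "2024-03-02T02:15", "serious": True},
    {"type": "fall", "when": "2024-03-14T14:40", "serious": False},
    {"type": "fall", "when": "2024-04-05T23:50", "serious": True},
    {"type": "medication", "when": "2024-04-06T09:00", "serious": True},
]

def is_night(ts: str, start: int = 22, end: int = 6) -> bool:
    """Night window assumed to be 22:00-06:00; adjust to the service's rota."""
    hour = datetime.fromisoformat(ts).hour
    return hour >= start or hour < end

# Count falls during night hours per month so governance can see the trend.
night_falls = Counter(
    i["when"][:7]  # YYYY-MM
    for i in incidents
    if i["type"] == "fall" and is_night(i["when"])
)
print(dict(night_falls))  # {'2024-03': 1, '2024-04': 1}
```

The point is that governance reviews a specific, time-banded measure tied to the review's findings, rather than a generic all-falls count.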

Root cause analysis: avoid the “single cause” trap

RCA should not aim to find one person or one error. Serious incidents usually result from multiple contributory factors. Providers should explore:

  • Assessment quality and whether it reflected current need
  • Care planning clarity and accessibility for frontline staff
  • Staff competence, supervision and decision-making support
  • Staffing levels, stability, and the use of agency staff
  • Environment and equipment, including maintenance issues
  • Escalation pathways: who contacted whom, when, and why
  • Governance signals: what data existed and whether leaders acted

Use a structured method (for example contributory factor frameworks) but keep the language plain and operational so staff and commissioners can follow the logic.
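One way to keep contributory-factor analysis structured yet plain is to record findings grouped by domain. The domains below mirror the list above; the data structure itself is an illustrative assumption, not a mandated template.

```python
from collections import defaultdict

# Findings grouped by contributory-factor domain; entries are illustrative.
review = defaultdict(list)
review["assessment"].append("Falls risk assessment generic, not person-specific")
review["staffing"].append("Night team relied on informal knowledge, not written prompts")
review["environment"].append("Corridor lighting created disorienting shadows")

# A plain summary that staff and commissioners can follow.
for domain, findings in sorted(review.items()):
    print(f"{domain}: {len(findings)} finding(s)")
    for f in findings:
        print(f"  - {f}")
```

Grouping findings this way makes it harder for a review to stop at a single cause, because each domain prompts its own question.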

Operational example 2: Safeguarding incident exposes weak escalation

Context: A safeguarding concern is raised after allegations of rough handling in a supported living service. The incident appears isolated until review.

Support approach: The provider runs a serious incident review focusing on supervision, staffing stability, reporting culture and restrictive practices.

Day-to-day delivery detail: Reviewers find staff were unsure what constituted a reportable safeguarding concern and used “quiet fixes” rather than formal reporting. Supervision records show limited reflective discussion about practice under stress. The provider introduces a clear escalation flowchart, strengthens induction content, adds scenario-based training, and implements targeted practice observations for high-risk shifts. The service also introduces a requirement that any allegation triggers a same-day management review and documented safeguarding threshold decision.

How effectiveness or change is evidenced: The provider tracks safeguarding concerns and near-misses, showing improved reporting quality, faster decision-making and documented threshold rationale, supported by supervision audits.
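The same-day threshold-decision requirement above is auditable as a simple compliance measure. The sketch below assumes hypothetical record fields for when an allegation was raised and when the threshold decision was documented; the shape of the records is an assumption for illustration only.

```python
from datetime import date

# Illustrative allegation records; field names are assumptions for this sketch.
allegations = [
    {"raised": date(2024, 5, 1), "threshold_decision": date(2024, 5, 1)},
    {"raised": date(2024, 5, 9), "threshold_decision": date(2024, 5, 11)},
    {"raised": date(2024, 5, 20), "threshold_decision": None},
]

def same_day_review_rate(records) -> float:
    """Share of allegations with a documented same-day threshold decision."""
    met = sum(1 for r in records if r["threshold_decision"] == r["raised"])
    return met / len(records)

rate = same_day_review_rate(allegations)
print(f"Same-day threshold decisions: {rate:.0%}")  # 33% in this sample
```

A measure like this gives supervision audits a concrete number to track over time, rather than a general impression that reporting has improved.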

Operational example 3: Medication incident leads to targeted assurance controls

Context: A person supported misses critical medicines over a weekend due to a handover breakdown between two teams.

Support approach: The provider reviews system controls: MAR processes, pharmacy ordering, weekend coverage and escalation rules.

Day-to-day delivery detail: The review finds handover notes were inconsistent, weekend staff were unfamiliar with ordering routes, and no one owned “critical meds” checks. The provider introduces a critical medicines register, a Friday verification step, and a weekend escalation checklist that requires staff to contact the on-call manager if stock is below a defined threshold. Training is targeted to weekend staff and agency workers, supported by short competency checks.

How effectiveness or change is evidenced: Audits measure compliance with the register and escalation checklist. Governance reviews show reduced missed-dose incidents and improved evidence of proactive ordering.
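The weekend escalation rule described above (contact the on-call manager if stock falls below a defined threshold) lends itself to a simple register check. The medicine names and threshold values below are illustrative assumptions; a service would set its own thresholds with its pharmacy.

```python
# Illustrative critical medicines register; thresholds are assumptions
# a service would agree with its pharmacy, not prescribed values.
register = [
    {"medicine": "insulin", "stock_doses": 4, "min_doses": 6},
    {"medicine": "anticonvulsant", "stock_doses": 10, "min_doses": 6},
    {"medicine": "anticoagulant", "stock_doses": 2, "min_doses": 4},
]

def weekend_escalations(items):
    """Return medicines below the defined threshold, for on-call escalation."""
    return [i["medicine"] for i in items if i["stock_doses"] < i["min_doses"]]

flagged = weekend_escalations(register)
print(flagged)  # ['insulin', 'anticoagulant']
```

Run as part of a Friday verification step, a check like this turns "someone should notice low stock" into an owned, auditable control.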

Commissioner expectation

Commissioners expect serious incidents to be reviewed promptly, proportionately and transparently, with clear actions, timelines, named owners and evidence of sustained improvement.

Regulator / Inspector expectation

CQC expects providers to learn from serious incidents, take action that reduces risk, and demonstrate effective leadership oversight with a clear audit trail from incident to improvement.

Governance oversight: turning reviews into organisational learning

Serious incident reviews should not end when actions are listed. Governance must ensure:

  • Actions are completed on time and quality-checked
  • Impact measures are defined (not just “training completed”)
  • Learning is shared across services, not contained locally
  • Repeat incidents trigger escalation and deeper system review

Providers strengthen defensibility when they can demonstrate how one serious incident improved wider systems and reduced risk across the organisation.