How Registered Managers Demonstrate Accountability for Weak Audit and Quality Assurance Systems

Audits are meant to show whether a service is safe, consistent and well-led. In many services, however, audits are completed without ever changing practice: boxes are ticked, forms are filled in, but risks remain. When this happens, accountability sits with the Registered Manager. The key question is whether audit systems identify real issues, lead to action and improve outcomes. For further guidance, see our Registered Manager accountability guidance, CQC quality statements resources and CQC compliance knowledge hub.

Why this matters

Weak audit systems create a false sense of control. Issues may exist in care delivery, but they are not identified clearly or escalated appropriately. This means risks continue without management intervention.

It also affects credibility. If audits show high compliance but incidents, complaints or observations show otherwise, the service appears unreliable. This makes it difficult to evidence effective leadership.

Strong Registered Manager accountability means audits are accurate, consistent and lead to measurable improvement. It also means that audit findings are challenged, not just recorded.

Clear framework for accountable quality assurance

An effective audit system should identify issues, record them clearly and ensure action is taken. This requires audits that reflect real practice, not assumptions. It also requires clear ownership of follow-up actions.

The Registered Manager must be able to show that audit findings are reviewed, prioritised and tracked to completion. This creates a clear link between audit activity and service improvement.

Accountability is strongest when audits align with other evidence, including incidents, supervision and feedback. This demonstrates that governance systems are working together rather than in isolation.

Operational example 1: Audit fails to identify poor record quality

Step 1. The senior staff member completes a file audit, reviews daily notes for completeness and accuracy, and records audit findings, including any missed entries or inconsistencies, in the documentation audit tool.

Step 2. The deputy manager reviews audit outcomes, compares them with incident reports and identifies discrepancies between recorded compliance and actual issues, then records findings in the governance tracker.

Step 3. The Registered Manager reviews conflicting evidence, determines whether audit quality is reliable and records decisions, including re-audit or staff accountability actions, in the quality assurance log.

Step 4. The audit process is revised to include clearer criteria and sampling methods, with changes recorded in the audit framework and communicated to staff through briefing records.

Step 5. The Registered Manager reviews updated audit results, checks whether improvements reflect real practice and records trends, outcomes and further actions in governance meeting minutes.
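The cross-check in Step 2, comparing recorded compliance against incident data to spot unreliable audits, can be sketched as a minimal illustrative model. The field names and thresholds below are assumptions for illustration only, not part of any specific audit tool.

```python
from dataclasses import dataclass

@dataclass
class AuditResult:
    area: str
    compliance_pct: float  # compliance score recorded by the auditor, 0-100

def flag_discrepancies(audits, incident_counts,
                       score_threshold=90.0, incident_threshold=3):
    """Flag areas where the audit reports high compliance but incident
    reports tell a different story. Both thresholds are illustrative
    assumptions and would be set by the service's own governance policy."""
    flagged = []
    for audit in audits:
        incidents = incident_counts.get(audit.area, 0)
        if audit.compliance_pct >= score_threshold and incidents >= incident_threshold:
            flagged.append(audit.area)
    return flagged

# Example: daily notes score 98% on audit yet generate five incidents,
# so the area is flagged for re-audit and review of audit quality.
audits = [AuditResult("daily notes", 98.0), AuditResult("medication", 80.0)]
print(flag_discrepancies(audits, {"daily notes": 5, "medication": 1}))
```

Any area this check flags would prompt exactly the decision described in Step 3: either the audit is re-run, or the reliability of the auditing itself is questioned.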

What can go wrong is that audits reflect what should be happening rather than what is actually happening. Early warning signs include high audit scores with ongoing incidents or complaints. Escalation may involve re-auditing, changing audit tools or reviewing staff competency. Consistency is maintained through clearer criteria and cross-checking evidence.

Governance should scrutinise the quality of the audits themselves, including sampling accuracy and alignment with incidents. Deputies review weekly, the Registered Manager reviews monthly and provider oversight reviews trends. Action is triggered by repeated discrepancies between audit results and real outcomes.

The baseline issue is often that audits are completed but not reliable. Improvement can be measured through more accurate findings, better alignment with practice and clearer action tracking. Evidence comes from audit reports, incident logs and supervision records.

Operational example 2: Audit actions are identified but not completed

Step 1. The auditor identifies issues during a routine audit, assigns actions with clear deadlines and records each action, responsible person and expected outcome in the audit action log.

Step 2. The responsible staff member completes assigned actions within the required timeframe and records completion details, including evidence of change, in the audit action tracker.

Step 3. The deputy manager reviews action completion status, identifies overdue or incomplete tasks and records escalation decisions and revised deadlines in the governance tracker.

Step 4. The Registered Manager reviews persistent non-completion, determines whether additional support or accountability measures are required and records decisions in the quality assurance oversight log.

Step 5. The Registered Manager reviews overall action completion rates, identifies trends and records service improvements and monitoring arrangements in governance meeting minutes.
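The action tracking described in Steps 1 to 5, with deadlines, overdue escalation and completion rates, can be sketched as a simple in-memory model. The structure and field names are hypothetical assumptions; a real audit action tracker would be whatever tool the service already uses.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AuditAction:
    description: str
    owner: str           # responsible person (Step 1)
    deadline: date
    completed_on: Optional[date] = None  # evidence of completion (Step 2)

def overdue(actions, today):
    """Actions past deadline with no recorded completion (Step 3 review)."""
    return [a for a in actions if a.completed_on is None and a.deadline < today]

def completion_rate(actions):
    """Overall completion rate reviewed in governance meetings (Step 5)."""
    if not actions:
        return 1.0
    done = sum(1 for a in actions if a.completed_on is not None)
    return done / len(actions)

actions = [
    AuditAction("Correct missed daily note entries", "Senior carer",
                date(2024, 1, 10), completed_on=date(2024, 1, 9)),
    AuditAction("Refresh record-keeping training", "Deputy manager",
                date(2024, 1, 5)),
]
print([a.description for a in overdue(actions, date(2024, 1, 12))])
print(completion_rate(actions))
```

The point of the sketch is the separation of concerns the steps describe: assignment, completion evidence, overdue detection and trend reporting are distinct records, so non-completion is visible rather than silently absorbed.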

What can go wrong is that actions are recorded but not followed through. Early warning signs include repeated overdue actions and unchanged audit results. Escalation may involve supervision, performance management or increased oversight. Consistency is maintained through action tracking and regular review.

Governance should audit action completion, timeliness and effectiveness. Deputies review weekly, the Registered Manager reviews monthly and provider oversight reviews trends. Action is triggered by overdue actions or repeated audit failures.

The baseline issue is often that actions exist but are not completed. Improvement can be measured through higher completion rates and reduced repeat issues. Evidence comes from audit logs, governance records and supervision files.

Operational example 3: Audit systems do not reflect real service risks

Step 1. The deputy manager reviews incident trends, identifies high-risk areas not covered by current audits and records gaps in the audit coverage review document.

Step 2. The Registered Manager updates the audit schedule to include high-risk areas, ensures new audit criteria are defined and records changes in the audit framework and governance documentation.

Step 3. The assigned auditor completes the revised audit, focusing on the identified risk areas, and records findings, including compliance and gaps, in the updated audit tool.

Step 4. The Registered Manager reviews audit findings, checks alignment with known risks and records decisions, including service changes or further investigation, in the quality assurance log.

Step 5. The Registered Manager reviews audit coverage regularly, ensures it remains aligned with service risks and records updates and monitoring arrangements in governance meeting minutes.
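The gap check in Step 1, comparing incident trends against current audit coverage, can be sketched as a minimal comparison of two lists. The area names are illustrative assumptions; a real review would draw on the service's own incident categories and audit schedule.

```python
from collections import Counter

def coverage_gaps(incident_areas, audit_schedule):
    """Return areas that appear in incident data but are absent from the
    audit schedule, with incident counts, so the highest-risk gaps are
    visible first when the schedule is updated (Step 2)."""
    counts = Counter(incident_areas)
    return {area: n for area, n in counts.items() if area not in audit_schedule}

# Example: two moving-and-handling incidents in an area no audit covers.
incidents = ["moving and handling", "moving and handling", "medication"]
schedule = {"medication", "daily notes"}
print(coverage_gaps(incidents, schedule))
```

Running a check like this at each coverage review keeps the audit schedule anchored to the risks the service is actually experiencing, rather than to the areas it has always audited.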

What can go wrong is that audits focus on routine areas and miss emerging risks. Early warning signs include incidents in areas not audited and outdated audit tools. Escalation may involve revising audit scope or increasing oversight. Consistency is maintained through regular review of audit coverage.

Governance should audit alignment between audit scope and service risks. Managers review audit plans, the Registered Manager reviews trends and provider oversight reviews coverage. Action is triggered by gaps in audit coverage or repeated incidents.

The baseline issue is often that audits do not reflect real risks. Improvement can be measured through better alignment, reduced incidents and clearer oversight. Evidence comes from audit plans, incident data and governance records.

Commissioner expectation

Commissioners expect audit systems to provide accurate and reliable information about service quality. They want evidence that audits identify issues, actions are completed and improvements are sustained. This includes clear documentation and visible follow-up.

They are also likely to assess whether audit findings align with other evidence. A strong service can demonstrate consistency between audits, incidents and outcomes.

Regulator / Inspector expectation

Inspectors will review audit systems to confirm that they reflect real practice. They will compare audit findings with observed care, incidents and records to assess reliability.

If audits are inaccurate or do not lead to action, accountability is weakened. If audits are effective and lead to improvement, leadership is easier to evidence.

Conclusion

Audit systems are a key part of Registered Manager accountability. They provide the evidence needed to show that services are safe, consistent and improving. When audits are weak, risks remain hidden and governance becomes unreliable.

Strong audit systems identify real issues, lead to action and demonstrate improvement. They also align with other evidence, creating a clear picture of service quality.

Accountability becomes visible when audit findings, actions and outcomes all connect. This ensures that governance is not just a process, but a practical tool for improving care delivery and maintaining safe, effective services.