Measuring What Matters: Outcomes, Not Activity, in NHS Health Inequalities Work

NHS community services are often data-rich but insight-poor when it comes to health inequalities. Activity metrics such as referral numbers, contacts and caseload size can mask persistent inequity in outcomes. Commissioners and regulators are increasingly focused on whether services improve stability, independence, safety and wellbeing for people who face the greatest barriers. This article supports the Health Inequalities, Access & Inclusion agenda and aligns with NHS Community Service Models and Pathways, because outcomes must be understood across the whole pathway, not only at individual intervention points.

Why activity data alone reinforces inequality

Activity measures are easy to collect and compare, but they rarely show whether services are effective for people with complex lives. High contact rates can coexist with poor outcomes if engagement is unstable, care plans are misunderstood, or support is withdrawn prematurely. For underserved groups, outcomes often worsen silently: repeated missed appointments, crisis escalation, safeguarding concerns, and re-entry into services after discharge.

Outcome-focused inequality work asks different questions. Are people more stable at discharge? Are crises reduced? Is risk better managed? Are people able to maintain engagement without escalation? These questions require services to track change over time and to disaggregate results for groups who experience disadvantage.

Operational example 1: Using stability outcomes to assess pathway effectiveness

Context: A community long-term conditions service reported high activity levels but continued high emergency admissions among people in the most deprived areas.

Support approach: The service introduced a “stability outcome” framework alongside activity reporting, focusing on what had changed for individuals by the end of support.

Day-to-day delivery detail: Clinicians recorded a short stability assessment at entry and discharge, covering symptom control, self-management confidence, medication adherence, and access to planned support. Outcomes were coded using simple categories (improved, unchanged, deteriorated) with free-text rationale. Data was reviewed monthly by deprivation quintile and by referral route.
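The monthly disaggregated review described above can be sketched in a few lines of code. This is a minimal illustration, not the service's actual reporting tool: the record fields, categories and sample data are hypothetical, and a real review would also break results down by referral route.

```python
from collections import defaultdict

# Illustrative discharge records: outcome category plus IMD deprivation
# quintile (1 = most deprived). Field names and values are hypothetical.
records = [
    {"quintile": 1, "outcome": "improved"},
    {"quintile": 1, "outcome": "unchanged"},
    {"quintile": 1, "outcome": "deteriorated"},
    {"quintile": 5, "outcome": "improved"},
    {"quintile": 5, "outcome": "improved"},
]

def improvement_rate_by_quintile(records):
    """Proportion of people coded 'improved' at discharge, per quintile."""
    totals, improved = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["quintile"]] += 1
        if r["outcome"] == "improved":
            improved[r["quintile"]] += 1
    return {q: improved[q] / totals[q] for q in sorted(totals)}

rates = improvement_rate_by_quintile(records)
# A marked gap between quintile 1 and quintile 5 improvement rates is the
# kind of signal that would prompt pathway review at the monthly meeting.
```

The point of the sketch is that disaggregation is cheap once outcomes are coded consistently; the analytical step is trivial compared with the discipline of recording stability at entry and discharge.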

How effectiveness or change is evidenced: The service identified that people from deprived areas were more likely to be discharged without improved stability due to early disengagement. In response, the pathway introduced extended follow-up for flagged cohorts. Subsequent reviews showed improved stability scores and a reduction in unplanned admissions within 60 days of discharge.

Operational example 2: Measuring engagement quality, not just attendance

Context: A community mental health interface pathway focused heavily on reducing DNAs (did-not-attend rates), but commissioners questioned whether attendance alone reflected meaningful engagement.

Support approach: The service developed an engagement quality measure to sit alongside attendance metrics.

Day-to-day delivery detail: Clinicians recorded whether agreed actions were understood, whether follow-up was completed, and whether the individual knew how to seek help if things worsened. Supervisors sampled records to check consistency and discussed cases where attendance was high but engagement quality was low.
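An engagement quality measure of this kind can be as simple as a count of yes/no checks recorded at each contact. The sketch below is illustrative only: the field names, the 0-3 score and the review threshold are assumptions, not the service's actual measure.

```python
def engagement_quality(record):
    """Simple 0-3 score from three yes/no checks mirroring those in the
    text: actions understood, follow-up completed, knows how to seek help.
    Field names and the scoring scheme are illustrative."""
    items = ("actions_understood", "follow_up_completed", "knows_how_to_seek_help")
    return sum(1 for item in items if record.get(item))

# Hypothetical contact record: the person attended, but most checks failed.
visit = {
    "attended": True,
    "actions_understood": False,
    "follow_up_completed": True,
    "knows_how_to_seek_help": False,
}

score = engagement_quality(visit)
# High attendance with a low engagement score is exactly the pattern the
# supervisors' record sampling is designed to surface for discussion.
flag_for_review = visit["attended"] and score <= 1
```

Separating the attendance flag from the quality score makes the mismatch the service was worried about directly visible in routine data.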

How effectiveness or change is evidenced: The service found that some cohorts attended appointments but did not implement care plans due to literacy, language or anxiety barriers. Targeted adjustments improved engagement quality scores and reduced crisis presentations linked to misunderstanding or lack of follow-through.

Operational example 3: Linking outcomes to safeguarding and risk reduction

Context: A community nursing pathway struggled to evidence impact for people experiencing repeated safeguarding concerns related to self-neglect.

Support approach: The service linked safeguarding data to outcome measurement rather than treating it as a separate domain.

Day-to-day delivery detail: For individuals with safeguarding involvement, outcomes included reduced frequency of alerts, improved adherence to care plans, and increased planned contact. MDT panels reviewed whether pathway changes (frequency of visits, communication adaptations, joint working) were associated with reduced risk.
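"Reduced frequency of alerts" needs a consistent way of normalising alert counts across review windows of different lengths. A minimal sketch, assuming alert dates are available per individual and using an illustrative 90-day period as the unit:

```python
from datetime import date

def alerts_per_quarter(alert_dates, start, end):
    """Average safeguarding alerts per 90-day period within a window.
    The 90-day normalisation is an illustrative choice, not a standard."""
    days = (end - start).days
    count = sum(1 for d in alert_dates if start <= d <= end)
    return count / (days / 90)

# Hypothetical individual whose pathway changed on 2024-01-01
# (increased visit frequency, communication adaptations).
alerts = [date(2023, 2, 1), date(2023, 5, 10), date(2023, 9, 3),
          date(2024, 4, 20)]

before = alerts_per_quarter(alerts, date(2023, 1, 1), date(2023, 12, 31))
after = alerts_per_quarter(alerts, date(2024, 1, 1), date(2024, 12, 31))
reduced = after < before  # the comparison an MDT panel would review
```

Comparing normalised rates before and after a pathway change is only an association, not proof of cause, which is why the text frames this as MDT review rather than automated judgement.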

How effectiveness or change is evidenced: Over time, the service demonstrated fewer repeat safeguarding referrals and more sustained engagement for high-risk cohorts, providing commissioners with defensible evidence of impact.

Commissioner expectation: Outcomes linked to inequality priorities

Commissioners increasingly expect outcome measures to reflect inequality priorities, not just generic service success. This includes demonstrating that outcomes are improving for people from deprived communities, people with disabilities, and other groups facing barriers. Commissioners will scrutinise whether outcome measures drive pathway change or simply report historic performance.

Regulator / Inspector expectation: Evidence that care makes a difference

CQC expects services to evaluate whether care is effective and person-centred. Inspectors look for evidence that outcomes are understood for different groups and that services learn when outcomes are poorer for disadvantaged people. Outcome measurement should inform quality improvement, risk management and safeguarding assurance.

Governance and assurance: embedding outcomes into routine review

Outcome measurement becomes credible when it is embedded into governance. Effective services use outcome dashboards alongside access and activity data, review variance at MDT or quality meetings, and document decisions made in response. This allows services to evidence not only what outcomes are achieved, but how learning is translated into safer, more equitable pathway design.
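The variance review step can be expressed as a simple rule on a dashboard: flag any group whose outcome rate diverges from the overall average by more than an agreed tolerance, and record what was decided. The grouping labels, rates and the 10-point threshold below are all illustrative assumptions.

```python
def flag_variance(rates, threshold=0.1):
    """Flag groups whose improvement rate deviates from the mean across
    groups by more than `threshold` (an illustrative cut-off a service
    would agree through its own governance)."""
    mean = sum(rates.values()) / len(rates)
    return {group for group, rate in rates.items() if abs(rate - mean) > threshold}

# Hypothetical dashboard figures: improvement rate by deprivation quintile.
rates = {"Q1": 0.42, "Q2": 0.55, "Q3": 0.58, "Q4": 0.60, "Q5": 0.62}
flagged = flag_variance(rates)
# Flagged groups go to the MDT or quality meeting, and the resulting
# decision is documented, closing the loop the paragraph describes.
```

The documentation step matters as much as the flag: a recorded decision in response to flagged variance is what turns a dashboard into assurance evidence.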