Building a Service User Feedback and Co-Production System That Stands Up to Scrutiny

In adult social care, service user feedback and co-production only become “real” when they consistently change day-to-day practice and can be evidenced under challenge. Done well, this work reduces complaints, improves outcomes, and strengthens safeguarding and rights-based practice. Done poorly, it becomes a box-ticking exercise that fails at inspection and in commissioner reviews. This article sets out an operationally credible approach, framed within quality standards and assurance frameworks so your learning and improvements are visible, defensible, and repeatable.

What “good” looks like in practice

A strong feedback and co-production system has three characteristics:

  • Multiple routes to voice (not just surveys): structured, unstructured, independent, and accessible options.
  • Clear decision pathways: who reviews what, when, and what happens next.
  • Evidence of change: “You said / We did / What changed / What we measured after” with dates, owners, and outcomes.

The focus is not volume of feedback; it is whether feedback changes risk management, daily support, staff practice, and quality outcomes.

Design the system around rights, safeguarding and restrictive practice oversight

Feedback and co-production must connect to safeguarding, Mental Capacity Act decision-making, and restrictive practice monitoring. People are most likely to be harmed or unheard at the “edges” of a service: during transitions, in crisis, when communication is limited, or when behaviours challenge. The system should therefore:

  • Prioritise accessible communication (easy read, symbols, audio, interpreter support, and choice of setting).
  • Separate “service experience feedback” from “safeguarding disclosures” while ensuring escalation routes are explicit.
  • Build routine prompts into support planning reviews: “What matters to you this month?” and “What is making life harder?”
  • Track restrictive practice concerns as feedback themes (e.g., locked cupboards, staffing patterns, denied activities) and route to governance.

Governance that makes co-production credible

Co-production becomes credible when it is governed like any other quality process. A practical model:

  • Monthly feedback review led by the Registered Manager: themes, risks, quick wins, and escalation items.
  • Quarterly co-production forum with minutes: service users, families/advocates, operational leads, and, where appropriate, commissioning liaison.
  • Quality and Safety meeting (or equivalent): feedback themes triangulated with incidents, audits, complaints, and KPI trends.
  • Board or senior leadership oversight: headline themes, repeat issues, and assurance that actions were completed and effective.

Governance should define timescales (e.g., urgent issues within 24–72 hours; quality issues within 28 days), evidence requirements (action logs and outcomes), and ownership (named leads and due dates).

Operational example 1: Feedback-driven change to staffing deployment

Context: In a supported living service, two people reported anxiety and distress during late afternoon and early evening periods when agency staff were commonly used. Feedback came through keyworker sessions and was echoed by families.

Support approach: The manager treated this as a quality and risk issue, not “preference”. The service introduced a structured feedback capture for two weeks (daily prompts, easy-read mood scale, and family call log). Findings were discussed at a co-production forum with individuals and families, alongside staffing rotas and incident notes.

Day-to-day delivery detail: The rota was redesigned so known staff covered “predictable pinch points” (16:00–20:00). Agency use was restricted to mornings only where possible. Staff handovers included a brief “what matters today” note (preferred routines, anxiety triggers, and early signs). Keyworkers checked in after tea-time using the same accessible mood scale to confirm whether changes helped.

How effectiveness was evidenced: The service compared incident frequency (distress calls and PRN requests) across the six weeks before and after the change, and recorded improved mood ratings across five consecutive weeks. The action log linked the feedback theme to the rota change, with follow-up evidence and a decision to keep the model.
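A pre/post comparison like this is simple enough to run in a spreadsheet, but a short script makes the calculation explicit and repeatable for audit. This is a minimal sketch: the weekly counts below are hypothetical, not the service's actual data.

```python
# Illustrative pre/post comparison of weekly incident counts
# (distress calls + PRN requests). All numbers are hypothetical.

pre_change = [9, 11, 8, 10, 12, 9]   # six weeks before the rota change
post_change = [7, 5, 4, 4, 3, 4]     # six weeks after the rota change

def weekly_mean(counts):
    """Average incidents per week."""
    return sum(counts) / len(counts)

pre_mean = weekly_mean(pre_change)
post_mean = weekly_mean(post_change)
reduction_pct = 100 * (pre_mean - post_mean) / pre_mean

print(f"Pre-change mean:  {pre_mean:.1f} incidents/week")
print(f"Post-change mean: {post_mean:.1f} incidents/week")
print(f"Reduction:        {reduction_pct:.0f}%")
```

Keeping the raw weekly counts (rather than only the headline percentage) is what lets the action log show a sustained trend rather than a one-week dip.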

Operational example 2: Co-producing a safer medication support routine

Context: A person reported feeling “rushed” and confused during medication prompts, leading to refusals and occasional missed doses. The person’s feedback was supported by an advocate who noted the person’s preference for privacy and clear explanations.

Support approach: The service arranged a co-production meeting including the person, advocate, a senior support worker, and (where appropriate) clinical input from pharmacy/GP liaison. The aim was to redesign the routine so it supported choice, dignity, and safety.

Day-to-day delivery detail: The service introduced a consistent time window chosen by the person, a quieter location, and a step-by-step prompt card in accessible format. Staff agreed to avoid “stacked prompts” (medication plus personal care plus household tasks) and to use a calm script. The person helped decide what information they wanted each time (name of medication, purpose, and what to do if they felt unwell). Refusals were recorded with a short reason code agreed with the person (“not ready”, “side effects”, “want to talk first”), triggering a planned follow-up rather than repeated prompting.

How effectiveness was evidenced: Medication refusal rates reduced over the next month, with improved consistency recorded on MAR checks. The person reported greater control and reduced anxiety in weekly keyworker sessions, and the change was captured as “You said / We did / Impact” with dates and signatures.

Operational example 3: Feedback leading to better community access and reduced restrictions

Context: Several people reported that planned community activities were regularly cancelled due to staffing shortages or risk concerns. Staff logs showed repeated “unable to facilitate” notes. People described feeling “stuck” and disengaged.

Support approach: The manager treated this as both a quality of life and restrictive practice risk (de facto restriction through non-delivery). The service ran a two-week co-production exercise: people chose their top three community priorities and identified barriers (time, transport, staffing confidence, anxiety triggers, and funding).

Day-to-day delivery detail: The rota was adjusted to protect activity time, and staff were trained in graded exposure planning and travel confidence. Risk assessments were rewritten with the person, focusing on positive risk-taking: clear control measures, agreed escalation steps, and “what good looks like” outcomes. A simple weekly schedule was displayed in accessible format, and cancellations were recorded with a required manager review if repeated.

How effectiveness was evidenced: Delivery was tracked (planned vs completed activities), with reasons for cancellations and remedial actions logged. Quality of life indicators improved (self-reported satisfaction, reduced agitation, fewer “boredom-related” incidents). The service could show the link between feedback, revised risk management, and measurable impact.
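The delivery tracking described above can be sketched as a simple log of planned activities with completion status and cancellation reasons. This is an illustration only: the records are invented, and the rule that a repeated cancellation reason triggers a manager review follows the approach described in the example, not a prescribed threshold.

```python
# Illustrative planned-vs-completed activity tracker with a repeat-
# cancellation check. Records and the review threshold are assumptions.

from collections import Counter

activities = [
    # (person, activity, completed, cancellation_reason)
    ("A", "swimming", True, None),
    ("A", "shopping trip", False, "staff shortage"),
    ("B", "college visit", True, None),
    ("A", "swimming", False, "staff shortage"),
    ("B", "cinema", True, None),
]

planned = len(activities)
completed = sum(1 for _, _, done, _ in activities if done)
cancel_reasons = Counter(reason for _, _, done, reason in activities if not done)

print(f"Completion rate: {completed}/{planned} ({100 * completed / planned:.0f}%)")

# A reason recurring twice or more is flagged for manager review.
for reason, count in cancel_reasons.items():
    if count >= 2:
        print(f"Manager review required: '{reason}' caused {count} cancellations")
```

Logging the reason alongside each cancellation is what turns "we track delivery" into evidence: the register shows not just the shortfall but the remedial action it triggered.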

Commissioner expectation: evidenceable learning and improvement

Commissioners typically expect feedback and co-production to be demonstrably linked to outcomes, quality improvement, and contract monitoring. Practically, this means you should be able to provide:

  • A clear feedback policy and process map (routes, timescales, escalation).
  • Action logs showing what changed, by whom, and by when.
  • Impact evidence (KPIs, incident trends, delivery measures, and service user-reported outcomes).
  • Examples where feedback influenced service delivery decisions (staffing, routines, community access, risk plans).

The commissioner test is simple: “Show us a theme, show us what you changed, and show us how you know it worked.”

Regulator / inspector expectation: involvement, responsiveness, and safe care

Regulators and inspectors (e.g., CQC) commonly test whether people are genuinely involved in decisions about their care and whether the service responds to concerns in ways that reduce risk and improve experience. In practice, they will look for:

  • Accessible involvement: people can give feedback in ways that work for them.
  • Responsiveness: concerns lead to timely action and follow-up.
  • Safety and rights: feedback themes about restrictions, dignity, or safeguarding are escalated and addressed.
  • Consistency: records show the approach is routine, not ad hoc.

Being able to evidence co-production around risk assessments, support planning changes, and restrictive practice reduction is often particularly persuasive.

Making it audit-ready without making it bureaucratic

Audit readiness comes from simple, consistent artefacts:

  • Feedback register (date, route, theme, severity, owner, due date, outcome).
  • You said / We did summaries that are accessible and dated.
  • Co-production minutes with attendance, decisions, and actions.
  • Triangulation notes linking feedback themes to incidents, audits, and performance data.
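The feedback register above can be kept in any format; as an illustration, a minimal structured record with an overdue-action check might look like the sketch below. The field names mirror the bullet's columns, and the example entries and the urgent/quality severity labels are assumptions, not a prescribed layout.

```python
# Minimal sketch of a feedback register entry plus an overdue-action
# check. Field names follow the register columns described above;
# the example records are hypothetical.

from dataclasses import dataclass
from datetime import date

@dataclass
class FeedbackEntry:
    received: date     # date feedback was received
    route: str         # e.g. "keyworker session", "family call", "survey"
    theme: str         # e.g. "staffing consistency", "community access"
    severity: str      # e.g. "urgent" or "quality"
    owner: str         # named lead responsible for the action
    due: date          # agreed completion date
    outcome: str = ""  # empty until the action is evidenced as complete

def overdue(register, today):
    """Return entries past their due date with no recorded outcome."""
    return [e for e in register if e.due < today and not e.outcome]

register = [
    FeedbackEntry(date(2024, 3, 1), "keyworker session", "staffing consistency",
                  "quality", "Registered Manager", date(2024, 3, 29),
                  "Rota redesigned; post-change monitoring completed"),
    FeedbackEntry(date(2024, 3, 10), "family call", "cancelled activities",
                  "quality", "Deputy Manager", date(2024, 4, 7)),
]

for entry in overdue(register, today=date(2024, 4, 15)):
    print(f"OVERDUE: {entry.theme} (owner: {entry.owner}, due {entry.due})")
```

The point of the structure is that "closed" means an outcome is recorded, not merely that the due date has passed; an empty outcome field keeps the item visible until the change is evidenced.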

Keep the focus on evidence of change. If a theme repeats, your system should show what you tried, what you measured, and what you are doing next.