Contract Continuity in a New Era: Beyond Winning to Keeping Health & Social Care Contracts

The Procurement Act 2023 (now live) has changed public procurement — and with it, how health and social care providers define success. Writing a great bid gets you through the door; keeping the contract long term now depends on how well you evidence delivery, outcomes and social value.

To strengthen this thinking in your bid library, it’s worth revisiting both bid writing principles and tender strategy. The core message is simple: under MAT-style evaluation, the strongest providers don’t just say the right things — they build an evidence system that makes quality and value easy to verify.


Why this shift matters for care providers

For years, the typical cycle looked like this: bid → win → deliver → re-tender. Even excellent providers could lose services to cheaper bids at re-procurement. Under the new regime, commissioners can justify continuity where providers demonstrate the Most Advantageous Tender (MAT) in practice — sustained quality, workforce stability, governance and social impact. That requires a systematic way to present evidence at reviews, framework call-offs and renewal points.

In other words, you’re no longer competing only in “tender year”. You’re competing every quarter — through your assurance, performance reporting, and ability to show a credible line of sight from:

  • need (who you support and why), to
  • delivery (what you do and how you control risk), to
  • outcomes (what changes for people), to
  • value (why the public gets a better return over time).

From MEAT to MAT — what’s really changed

  • Beyond price: MAT puts outcome evidence and public benefit alongside cost. Providers who measure and communicate impact are advantaged.
  • Transparency by default: Commissioners will publish performance and change notices. Your quality story must stand up publicly.
  • Continuity is credible: If you can show consistent evidence, authorities have clear grounds to retain you.
  • Framework-first: More call-offs and fewer open mini-competitions — which raises the bar for in-contract evidence.

Practically, this means “being good” is not enough. You need a repeatable method for proving:

  • control (governance and risk management)
  • consistency (workforce stability and reliable delivery)
  • improvement (learning loops that close actions)
  • impact (outcomes that matter to people and commissioners)
  • public benefit (social value aligned to local priorities)

Why frameworks are the new battleground

Expect more emphasis on frameworks, approved lists and direct call-offs. That means fewer large open competitions, but more evidence-led continuity for trusted providers. The winners will be those who can present a clear, auditable line of sight from inputs → activities → outcomes — and show it quarter after quarter.

What this looks like in real procurement behaviour

  • Framework entry matters more: once you’re on, the real competition shifts to call-offs and performance-based trust.
  • Commissioners prefer stability: they will retain incumbents where the decision is justified, defensible, and evidence-led.
  • Evidence becomes a commercial asset: the same performance pack supports contract reviews, extensions, spot checks, and future bids.

Build a retention-ready evidence model

  1. Define meaningful KPIs: Prioritise outcome measures, service timeliness, workforce stability, safeguarding and satisfaction. Keep a tight set you can sustain.
  2. Document governance: Supervision, audits, escalation logs and lessons learned are as important as KPIs — they prove control.
  3. Track social value: Jobs, apprenticeships, inclusion, community partnerships, environmental gains — quantify them and align with local priorities.
  4. Create an evidence library: Standardised case studies, performance graphs and testimonials with consent trails. A library speeds up reviews and call-offs.
  5. Adopt a quarterly cadence: Don’t wait for renewal year; produce a rolling Quarterly Performance Report.

The strategic aim is not “more paperwork”. It’s to create a small number of high-trust artefacts that commissioners can rely on repeatedly.


What goes into a “MAT-ready” evidence pack

If you want a simple target: build a pack that a commissioner can read in 20 minutes and then confidently defend the decision to keep working with you. A strong pack usually includes:

1) One-page KPI dashboard

  • Clear KPI definitions (so data cannot be challenged as “unclear”).
  • RAG status and trend (not just the number).
  • Short narrative: what happened, so what, now what.
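A dashboard row like this is easy to automate. The sketch below shows one way to derive a RAG status and a trend arrow from quarterly KPI values; the KPI name, target and tolerance are illustrative assumptions, not prescribed thresholds, and assume a "higher is better" metric.

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    target: float        # contractual or internal target (higher is better here)
    tolerance: float     # how far below target still counts as Amber
    values: list         # quarterly values, oldest first

def rag_status(kpi: KPI) -> str:
    """Green if the latest value meets target, Amber if within tolerance, else Red."""
    latest = kpi.values[-1]
    if latest >= kpi.target:
        return "Green"
    if latest >= kpi.target - kpi.tolerance:
        return "Amber"
    return "Red"

def trend(kpi: KPI) -> str:
    """Direction of travel over the last two reporting periods."""
    if len(kpi.values) < 2:
        return "n/a"
    prev, latest = kpi.values[-2], kpi.values[-1]
    if latest > prev:
        return "improving"
    if latest < prev:
        return "declining"
    return "stable"

# Illustrative figures only.
on_time = KPI("On-time visit rate (%)", target=95.0, tolerance=3.0,
              values=[91.2, 93.5, 94.1])
print(f"{on_time.name}: {on_time.values[-1]} | "
      f"{rag_status(on_time)} | {trend(on_time)}")
# → On-time visit rate (%): 94.1 | Amber | improving
```

Pairing the status with the trend matters: "Amber but improving" tells a commissioner a very different story from a bare number.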

2) Outcomes evidence (not only activity)

  • 2–3 outcome indicators that show change over time (not a one-off).
  • Examples tied to specific cohorts (reablement, discharge, LD/autism transitions, dementia, complex care).
  • Where possible, baseline → intervention → outcome.

3) Governance proof

  • Audit schedule and last quarter’s audit summary (meds, records, visit punctuality, care plan quality).
  • Training compliance snapshot (mandatory + role-specific competence sign-off).
  • Supervision cadence and themes (what you learned and changed).
  • Incident learning loop: theme → action → re-audit.

4) Social value report aligned to local priorities

  • Outputs (jobs, apprenticeships, volunteering hours, local spend).
  • Outcomes (retention, progression, sustained employment, community inclusion).
  • Mapping to the local plan (so commissioners can lift it straight into their reporting).

5) Case studies with consent trail

  • Short, outcome-led cases (half page each).
  • Include the person’s goals, what changed, and the measurable results.
  • Document consent and anonymisation approach (especially if shared externally).

Define meaningful KPIs without building a data warehouse

Many providers overcomplicate KPIs. Commissioners tend to favour a smaller set that is consistent, reliable, and clearly linked to contract outcomes. A pragmatic approach is to use five KPI “families” and choose 1–3 metrics in each:

1) Delivery reliability

  • On-time visit rate / missed visits / no-access outcomes.
  • Referral-to-start times (especially for discharge and urgent response).
  • Continuity indicator (e.g., known worker % or micro-team coverage where monitored).

2) Safety and safeguarding

  • Safeguarding concerns raised (with learning themes and action closure).
  • Medication errors (rate + corrective actions + re-audit results).
  • Falls / pressure damage / infection control themes (where relevant to service type).

3) Outcomes and independence

  • Reablement step-down rate at 14/28 days.
  • Independence goals achieved (tracked through care planning outcomes).
  • Admission avoidance / reduced readmissions (where data sharing allows).

4) Experience

  • Satisfaction scores (simple, repeatable method).
  • Complaints themes and resolution time.
  • “You said, we did” evidence from lived experience feedback.

5) Workforce stability

  • Turnover, vacancy rate, sickness/absence trends.
  • Training compliance and competence sign-off.
  • Supervision completion and themes.

Tip: Whatever you choose, keep it stable for at least 12 months. Commissioners trust trends — not constantly changing dashboards.
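One way to keep definitions stable and unchallengeable is to store each KPI's definition as a structured record alongside the number. This is a minimal sketch with hypothetical field names and wording, showing the kind of detail (numerator, denominator, inclusions, exclusions, data source) that stops a "missed visits" figure being disputed:

```python
# Each KPI carries an explicit written definition so reported numbers
# cannot be challenged as "unclear". All field names and wording below
# are illustrative, not a standard schema.
missed_visits = {
    "name": "Missed visit rate",
    "numerator": "visits not delivered with no alternative arranged",
    "denominator": "all scheduled visits in the reporting period",
    "counts": ["provider no-show", "late cancellation by provider"],
    "does_not_count": ["client-initiated cancellation", "documented no-access"],
    "source": "rostering system export, reconciled monthly",
    "review": "annually (kept stable within the year)",
}

def describe(kpi: dict) -> str:
    """Render the definition as a short footnote for the dashboard."""
    return (f"{kpi['name']}: {kpi['numerator']} / {kpi['denominator']}. "
            f"Counts: {', '.join(kpi['counts'])}. "
            f"Excludes: {', '.join(kpi['does_not_count'])}.")

print(describe(missed_visits))
```

Printing the footnote on every dashboard, unchanged month to month, is what builds the trend trust described above.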


Document governance in a way that proves control

In MAT terms, governance is not “nice to have”. It is the evidence that you can manage risk and deliver consistently under pressure. The most persuasive governance evidence is:

  • operational (what happens week to week)
  • auditable (you can show it happened)
  • closed-loop (actions get completed and checked).

Governance artefacts commissioners recognise quickly

  • Audit plan with dates, owners, findings summary, actions, and re-check dates.
  • Supervision matrix showing cadence, completion, and competency themes.
  • Risk register with escalation thresholds and mitigation actions.
  • Incident learning log (trend themes, changes made, evidence of improvement).
  • Quality meeting minutes with decisions and tracked actions.

When you present governance like this, it becomes “decision support” for commissioners — a credible basis for continuity.


Track social value in a way that survives scrutiny

Social value is often where providers lose easy marks because they rely on vague commitments. Under a transparency-led regime, social value claims need to be measurable and mapped.

Build a simple social value log

  • Commitment (what you promised in the tender or contract).
  • Metric (how you measure delivery).
  • Progress (quarterly totals + narrative).
  • Local alignment (which local priority it supports).
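In practice this log can live in a single spreadsheet or CSV, one row per commitment per quarter, which then rolls up automatically into the annual report. A minimal sketch, with entirely illustrative commitments, metrics and figures:

```python
import csv
import io
from collections import defaultdict

# Illustrative quarterly social value log: one row per commitment per quarter.
LOG = """commitment,metric,quarter,value,local_priority
Local recruitment,staff hired locally,2025-Q1,4,Employment & skills
Local recruitment,staff hired locally,2025-Q2,3,Employment & skills
Apprenticeships,apprenticeship starts,2025-Q1,2,Employment & skills
Volunteering,volunteer hours,2025-Q2,60,Community inclusion
"""

def year_to_date(log_csv: str) -> dict:
    """Total each commitment's metric across quarters for the annual rollup."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(log_csv)):
        totals[(row["commitment"], row["metric"])] += float(row["value"])
    return dict(totals)

for (commitment, metric), total in year_to_date(LOG).items():
    print(f"{commitment}: {metric} = {total:g}")
```

Because each row names the local priority it supports, the same log answers both "did you deliver what you promised?" and "does it map to our plan?" without extra work at year end.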

Examples of “good” social value evidence

  • Local recruitment pipeline: number hired, retention at 3/6/12 months, progression into senior roles.
  • Apprenticeships: starts, completions, sustained employment.
  • Community partnerships: measurable outputs (sessions delivered, referrals supported, outcomes achieved).
  • Environmental improvements: verified reductions (mileage optimisation, paper reduction, recycling rates) where tracked.

Social value becomes a retention asset when it is easy to evidence and clearly linked to what the local system cares about.


A practical example

A domiciliary care provider joins a framework in 2025. Over 12 months they implement a disciplined reporting rhythm: concise KPI dashboards, two outcome-led case studies per quarter, a supervision matrix with escalation logs, and a short social value update. At the 18-month review, they present a single, coherent pack. The authority uses the evidence to justify a call-off and later an extension. The difference wasn’t “better promises” — it was better proof.


Commissioners’ 2025–26 wish list

  • Simple KPI frameworks with definitions and reliable data trails.
  • Outcome-based case studies — measurable progress, not anecdotes.
  • Governance evidence — training, supervision, incident learning, audits.
  • Social value quantified and mapped to the local plan.
  • Language that mirrors MAT and transparency duties.

Common pitfalls to avoid

  • Generic narrative: Commissioners can spot “filler”. Anchor every claim to data or documents.
  • Data without meaning: Add short analysis (“what, so what, now what”).
  • Missing attachments: If the text references audits or training compliance, attach them.
  • Weak social value: Replace statements with metrics (e.g., “12 apprenticeships; 250 volunteering hours; 18 ESG projects”).
  • Year-end panic: Quarterly cadence beats last-minute compilation.
  • Unclear definitions: If you measure “missed visits”, define what counts and what doesn’t.
  • Overclaiming: Promising “zero” incidents or “100%” perfection weakens credibility. Show control and learning instead.

Action checklist (print and pin)

  1. Audit one live contract against MAT (KPIs, governance, social value, case studies).
  2. Set a 90-day plan to fix two evidence gaps with the highest commissioning impact.
  3. Standardise a one-page KPI dashboard and a two-page narrative template.
  4. Collect two new, consented case studies with measurable outcomes each quarter.
  5. Embed a social value log; report quarterly against local priorities.
  6. Book a commissioner touchpoint to walk through the evidence — don’t wait for renewal.

A simple quarterly cadence you can sustain

“Cadence” is what separates providers who are always ready from providers who panic at review time. A workable rhythm looks like this:

  • Monthly: dashboard refresh, audit summary, workforce snapshot, social value log update.
  • Quarterly: publish a short performance report (dashboard + narrative + two case studies + social value).
  • Quarterly: commissioner touchpoint (walk through trends, risks, actions, and next quarter priorities).
  • Annually: consolidated report that reuses the quarterly packs (no reinvention).

When a call-off opportunity or a renewal conversation appears, you’re already ready — because the evidence exists.


Start small if you're time-poor

If you’re a smaller provider without a performance team, begin with five KPIs: service stability, outcomes achieved, satisfaction, staff retention, safeguarding incidents. Add two case studies per quarter, a one-page dashboard, and a simple social value log. This minimal set — kept consistent — will dramatically improve review conversations.

Minimum viable evidence pack (for smaller providers)

  • 1-page dashboard (5 KPIs + RAG + trend + short narrative).
  • 1-page governance summary (audits completed, supervision cadence, training compliance).
  • 2 short case studies (outcome-led, anonymised, consented).
  • 1-page social value log (3–5 commitments with numbers).

Looking ahead — fewer tenders, more continuity

As frameworks mature, expect fewer open tenders and more evidence-led extensions. Commissioners prefer stability when providers demonstrate impact and control. That’s good news for teams who can produce clear, triangulated assurance — and a warning for those who rely on generic text.


Turn delivery into proof

In this environment, your competitive edge is not louder promises; it’s cleaner evidence. Link CQC findings, audits, training, supervision and lived experience to a stable KPI set. Then present it in a format that’s quick to read and easy to verify — the essence of a Most Advantageous Tender in practice.


Key takeaway

The Procurement Act 2023 rewards transparency and proof. Providers who invest in outcomes evidence and governance can expect fewer disruptive re-tenders and more continuity.