Responding to Deepfake and Impersonation Scams: A Digital Safeguarding Playbook
Technology-enabled harm is evolving. Impersonation scams that once relied on poor spelling and obvious pressure tactics are now supported by AI-generated voice notes, spoofed numbers, and “deepfake” video clips designed to trigger panic and fast decisions. In adult social care, the impact can be immediate: coerced money transfers, unsafe travel plans, sharing of personal data, or escalating distress that affects daily functioning. Providers need an operational playbook that is practical on shift, defensible in governance, and rooted in least restrictive practice.
For supporting resources and connected practice areas, use the Digital Safeguarding & Risk materials alongside Digital Care Planning guidance to ensure risk management is embedded in everyday support planning.
What deepfake and impersonation scams look like in adult social care
In practice, scams often present as “urgent contact” where the person believes they are responding to someone real — a family member, a support worker, a bank, a courier, or even the local authority. Typical features include:
- Speed pressure (“do it now or something bad happens”).
- Authority cues (official logos, familiar names, staff-like language).
- Emotional triggers (fear, shame, responsibility, “you’re the only one who can help”).
- Isolation tactics (“don’t tell anyone”, “keep this private”).
Deepfake elements can increase believability — a voice note that sounds like a relative, or a video message that appears authentic. The provider response must therefore focus on verification steps and decision support, not simply telling someone “it’s a scam”.
Commissioner expectation
Providers have an auditable approach to technology-enabled financial abuse and coercion risk, including clear escalation thresholds, incident reporting, and learning mechanisms. Commissioners will expect evidence that staff know what to do at the point of risk and that governance systems monitor patterns and actions.
Regulator / Inspector expectation
People are protected from abuse and avoidable harm, and staff respond promptly, proportionately and with respect for the person’s rights. Inspectors (e.g., CQC) will look for evidence of safe systems (training, supervision, reporting routes), person-centred planning, and defensible decision-making when restrictions or intensive monitoring are considered.
Core elements of a practical “verification-first” approach
A verification-first approach means the service standardises what staff do when contact is suspicious, without turning every interaction into surveillance. Key elements include:
- Agreed verification routes (known numbers, call-back procedures, trusted contacts list).
- A pause rule (no money transfers, travel decisions, or account changes until verification is completed).
- Support to regulate distress before decisions are made (calming strategies, structured conversation).
- Clear thresholds for safeguarding escalation and incident recording.
These steps should be embedded in the person’s plan where risk is known, and available as a service-wide response where risk emerges unexpectedly.
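For services building these steps into digital care planning or incident tools, the pause rule and escalation thresholds above can be sketched as a simple decision flow. This is a minimal illustration, not a clinical or safeguarding tool: the field names, risk indicators, and routing labels are hypothetical and would need local definition against the service’s own thresholds.

```python
# Minimal sketch of a verification-first decision flow.
# All names, actions, and thresholds are illustrative, not a standard.

from dataclasses import dataclass

# Actions covered by the pause rule in the person's plan.
HIGH_RISK_ACTIONS = {"money_transfer", "travel", "account_change", "data_sharing"}

@dataclass
class SuspiciousContact:
    method: str                      # e.g. "voice_note", "video", "sms"
    requested_action: str            # e.g. "money_transfer"
    urgency_cues: bool = False       # "do it now" pressure
    secrecy_request: bool = False    # "don't tell anyone"
    verified: bool = False           # confirmed via a known call-back route

def next_step(contact: SuspiciousContact) -> str:
    """Apply the pause rule, then route to verification or escalation."""
    if contact.requested_action in HIGH_RISK_ACTIONS and not contact.verified:
        # Pause rule: no high-risk action until verification is completed.
        if contact.urgency_cues or contact.secrecy_request:
            # Coercion indicators present: escalate as well as pause.
            return "pause_and_escalate_to_safeguarding_lead"
        return "pause_and_verify_via_known_contact"
    return "proceed_with_recorded_rationale"
```

For example, a voice note demanding an urgent transfer with a secrecy request routes to `pause_and_escalate_to_safeguarding_lead`, while the same request after successful call-back verification routes to `proceed_with_recorded_rationale`.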
Operational example 1: “Your son needs help” voice note with payment demand
Context: A person receives a voice note that sounds like their adult child saying they have been arrested and need money urgently. The message includes a bank transfer request and asks them not to tell anyone “because it’s embarrassing”. The person becomes distressed and wants to leave immediately to “fix it”.
Support approach: Staff treat this as a potential coercion and financial abuse safeguarding risk. They prioritise emotional regulation, verification, and preventing impulsive decisions while maintaining the person’s dignity.
Day-to-day delivery detail:
- Staff use a calm, structured script: “Let’s slow this down and check it properly. We can help you confirm it’s really them.”
- The plan’s trusted contact list is used: staff support a call-back to the family member using the known number stored in records (not the number in the message).
- While verification is underway, staff support grounding strategies (breathing, sitting with a warm drink, moving to a quieter space) and record observable distress indicators.
- If the person insists on transferring money, staff apply the agreed pause rule and escalate to the manager/safeguarding lead if coercion indicators are present.
How effectiveness/change is evidenced: The scam is identified through call-back verification. The provider evidences that staff prevented financial harm without using blanket restrictions, and captures learning for team briefing and plan update.
Operational example 2: Impersonation of a professional requesting personal data
Context: A person receives a message claiming to be from a “care coordinator” asking for copies of ID and bank details to “complete a benefits review”. The message uses formal language and includes a logo. The person is proud of managing paperwork independently and starts gathering documents.
Support approach: Staff support the person’s independence while introducing safe verification steps and data minimisation. The goal is to avoid embarrassment and preserve trust.
Day-to-day delivery detail:
- Staff validate the person’s intent (“You’re doing the right thing by responding to official requests”) while guiding a verification step: contacting the professional via the known organisational route.
- The service uses an agreed principle: never share ID or bank details via unverified links or messaging.
- Staff support the person to prepare a safe response: “I will call the office number to confirm where to send documents.”
- If the person has already shared information, staff follow an incident pathway: manager notified, advice sought (e.g., bank contact support where appropriate), and safeguarding discussion if exploitation risk is suspected.
How effectiveness/change is evidenced: The provider can evidence timely prevention and, where data was shared, prompt reporting and risk reduction actions. Governance records show that staff used the agreed verification route and updated risk planning accordingly.
Operational example 3: Deepfake video prompting unsafe travel or meeting a stranger
Context: A person receives a short video appearing to show a friend asking them to meet “right now” at a location. The person has a history of vulnerability to exploitation and becomes excited, preparing to leave without support, including late at night.
Support approach: Staff focus on immediate safety, verification, and positive risk-taking: enabling social connection while preventing unsafe situations. The response includes time-limited, clearly recorded steps rather than informal “stopping” measures.
Day-to-day delivery detail:
- Staff apply the plan’s community safety approach: “We can help you meet safely — let’s confirm who it is and plan how you get there.”
- Verification occurs through an alternative channel: call-back to the friend using a known number, or messaging via an established platform account with previous history.
- If verification fails and the person remains determined to go, staff use escalation thresholds: manager approval and a short-term safety plan (accompanied travel, agreed check-ins, alternative safe venue) where appropriate.
- Any restriction (e.g., delaying leaving for a set period) is recorded with rationale, review time, and step-down plan.
How effectiveness/change is evidenced: The provider evidences that the service prevented an unsafe meeting, maintained respect for the person’s wishes, and recorded a defensible rationale for any time-limited restrictive step. Subsequent reviews show whether risk reduced and independence could be increased safely.
Recording and evidence: what must appear in notes to withstand scrutiny
For commissioners and inspectors, the strength of the response is often judged by the clarity of records. Useful evidence includes:
- What the message was (summary of content, method of contact, urgency cues).
- Risk indicators observed (distress, pressure tactics, secrecy requests, requests for money/data).
- Verification actions taken (call-back route, who was contacted, outcome).
- Decision-making rationale (why escalation did or did not occur; proportionality; least restrictive approach).
- Outcome and learning (plan updates, staff briefing points, whether the person’s confidence improved).
Where the person shares information or loses money, documentation should evidence timely response, support to mitigate harm, and safeguarding consideration where exploitation risk is present.
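Where incident records are kept digitally, the evidence fields above map naturally onto a structured record with a completeness check. The sketch below is illustrative only; the field names are hypothetical and should be aligned with the service’s own recording system rather than treated as a template.

```python
# Illustrative structure for a technology-enabled harm incident record.
# Field names are hypothetical; align them with your recording system.

from dataclasses import dataclass
from typing import Optional

@dataclass
class IncidentRecord:
    message_summary: str              # content, contact method, urgency cues
    risk_indicators: list[str]        # distress, secrecy requests, money/data asks
    verification_actions: list[str]   # call-back route, who was contacted, outcome
    decision_rationale: str           # proportionality, least restrictive approach
    outcome_and_learning: str         # plan updates, briefing points
    harm_occurred: bool = False       # money lost or data shared
    mitigation_actions: Optional[str] = None  # required when harm_occurred is True

    def is_audit_ready(self) -> bool:
        """A record withstands scrutiny only if every evidence field is filled."""
        core_complete = all([
            self.message_summary,
            self.risk_indicators,
            self.verification_actions,
            self.decision_rationale,
            self.outcome_and_learning,
        ])
        # If harm occurred, mitigation must also be evidenced.
        return core_complete and (not self.harm_occurred or bool(self.mitigation_actions))
```

A completeness check of this kind can prompt staff at the point of recording, rather than leaving gaps to be found at audit.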
Governance: building a service-wide capability rather than ad-hoc “good practice”
Providers reduce repeat harm when verification-first becomes routine. Practical governance steps include:
- Team training scenarios using realistic scripts (voice notes, spoofed numbers, urgent video messages).
- Short “what to do” flowcharts available at point of care (including escalation thresholds).
- Monthly incident trend review categorising technology-enabled harm and tracking actions taken.
- Supervision prompts exploring staff confidence, boundary decisions, and least restrictive practice dilemmas.
- Care plan audit samples checking that digital risk is reflected where relevant and reviewed after incidents.
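The monthly trend review can start very simply: count incidents by category and flag categories that recur. A minimal sketch, assuming incidents are already recorded with a category label (the labels and the alert threshold here are hypothetical):

```python
# Count technology-enabled harm incidents by category for a monthly review.
# Category labels and threshold are illustrative, not a taxonomy.

from collections import Counter

def monthly_trend(incident_categories: list[str], alert_threshold: int = 3) -> dict:
    """Return per-category counts and flag categories at or above the threshold."""
    counts = Counter(incident_categories)
    return {
        "counts": dict(counts),
        "flagged": sorted(c for c, n in counts.items() if n >= alert_threshold),
    }
```

For example, three "voice_deepfake" incidents in a month against a threshold of three would surface that category for governance discussion and targeted team briefing.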
Maintaining dignity and rights while reducing harm
The most defensible responses are those that protect people without undermining autonomy. Providers can demonstrate quality by showing that they support people to verify, pause, and decide — and that any restrictive steps are time-limited, reviewed, and reduced when safe. In a rapidly changing digital landscape, the operational discipline is the same: clear thresholds, consistent actions, and evidence that links practice to outcomes.