Research Analysis · 17 February 2026

AI Transcription Risks in Social Work: What the Research Shows

In February 2026, The Guardian reported on research by the Ada Lovelace Institute and the University of Oxford that found AI transcription tools used in social work produced potentially harmful errors in official care records.

Key Findings

The research examined AI tools that record and transcribe conversations between social workers and service users, then generate summaries for case files. The findings were significant:

  • One tool hallucinated suicidal ideation that was never mentioned in the original conversation, inserting a fabricated mental health diagnosis into an official record.
  • Transcriptions produced gibberish and nonsensical phrases including references to "fishfingers or flies or trees" in place of actual dialogue.
  • Social workers were found to be spending only 2-5 minutes checking AI outputs before adding them to case files, raising questions about adequate human oversight.
  • The research identified potentially harmful misrepresentations entering official care records that could affect decisions about vulnerable people.

Why This Happens

These errors are inherent to how generative AI transcription works. The AI listens to audio, interprets what it hears, and generates new text based on its interpretation. At each stage, errors can be introduced:

  • Transcription errors from accents, dialects, background noise, or overlapping speakers.
  • Hallucinations where the AI generates content that was never spoken, filling gaps in its understanding with plausible but fabricated information.
  • Summarisation bias where the AI decides what is important and what to omit, potentially removing critical context.
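One practical consequence of these error modes is that hallucinated content is often detectable: fabricated statements tend to share few words with the source conversation. The sketch below is a deliberately crude lexical grounding check, not anything used in the research — it flags summary sentences whose vocabulary barely overlaps the transcript, the kind of lightweight safeguard that a 2-5 minute review could be paired with. The function name, threshold, and example text are illustrative assumptions.

```python
import re


def flag_ungrounded(transcript: str, summary: str, threshold: float = 0.5) -> list[str]:
    """Flag summary sentences with low word overlap against the transcript.

    A crude lexical check: a hallucinated sentence (content never spoken)
    will typically share almost no vocabulary with the source audio's
    transcript, so its overlap ratio falls below the threshold.
    """
    transcript_words = set(re.findall(r"[a-z']+", transcript.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        words = re.findall(r"[a-z']+", sentence.lower())
        if not words:
            continue
        # Fraction of the sentence's words that also appear in the transcript.
        overlap = sum(w in transcript_words for w in words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged


transcript = ("We talked about her housing situation and the repairs "
              "the landlord has promised.")
summary = ("Service user discussed housing repairs promised by the landlord. "
           "Service user expressed suicidal ideation.")
print(flag_ungrounded(transcript, summary))
```

A check like this catches only verbatim fabrications; paraphrased or subtly distorted content would pass, which is why the research's concern about superficial human review still stands.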

The Structuring Alternative

The distinction between AI that generates content and AI that structures existing content is critical in regulated environments. Structuring tools take the practitioner's own written notes and organise them into professional formats. Because the practitioner provides all the content, the tool cannot hallucinate, fabricate, or misinterpret.

This approach eliminates the specific risks identified in the research: no audio recording means no transcription errors, no content generation means no hallucinations, and no summarisation means no bias in what is included or excluded.
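The structuring approach can be made concrete with a minimal sketch. This is an illustrative assumption about how such a tool might work, not the implementation of any named product: every word of output is the practitioner's own text, reorganised under template headings, and a missing field is marked as missing rather than filled in by a model.

```python
def structure_notes(
    notes: dict[str, str],
    template_fields: tuple[str, ...] = ("presenting_issue", "actions_taken", "next_steps"),
) -> str:
    """Organise practitioner-written notes into a fixed professional template.

    The function only rearranges text supplied in `notes`; it generates no
    content of its own, so it cannot hallucinate. Fields the practitioner
    did not write are explicitly marked, never invented.
    """
    lines = []
    for field in template_fields:
        heading = field.replace("_", " ").title()
        lines.append(f"{heading}: {notes.get(field, '[not recorded]')}")
    return "\n".join(lines)


record = structure_notes({
    "presenting_issue": "Housing repairs outstanding since January.",
    "next_steps": "Follow-up visit booked for next week.",
})
print(record)
```

The design choice is the point: because the tool's output is a pure rearrangement of its input, the failure modes are limited to layout mistakes, which are visible at a glance, rather than fabricated clinical content, which is not.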

Implications for Practice

BASW has called for regulator guidance on AI tool use in social work. Until such guidance is published, organisations adopting AI documentation tools should consider whether those tools generate content or structure it, and what safeguards exist when AI-produced text enters official records about vulnerable people.

Sources: Robert Booth, "AI tools make potentially harmful errors in social work records, research says", The Guardian, 11 February 2026; Ada Lovelace Institute and University of Oxford research.