Gender Bias in AI-Generated Care Assessments
In August 2025, The Guardian reported on research by the London School of Economics that found AI tools used by English councils introduced systematic gender bias into care assessments. The study revealed that AI-generated summaries described identical needs differently depending on whether the service user was male or female.
Key Findings
- When summarising case notes about men, AI tools described their needs as "complex" and characterised them as "unable" to manage independently.
- When women presented identical needs, the same tools described them as "independent" and "able to manage".
- This systematic bias could lead to women receiving less support than men with the same care needs, because the AI-generated assessment language downplayed the severity of their conditions.
- The bias was embedded in the AI's language model, not in the original case notes written by practitioners.
How AI Summarisation Introduces Bias
When an AI tool summarises case notes, it does not simply shorten the text. It interprets the content and generates new language to describe it. This generation process draws on patterns in the data the AI was trained on, which may reflect historical biases in how men's and women's health needs have been documented.
The result is that even when a social worker writes identical assessments for male and female service users, the AI-generated summaries can use systematically different language, affecting how needs are perceived and how resources are allocated.
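One way to make this concrete is a counterfactual audit: run the same case note through the summariser twice, once with gendered terms swapped, and compare the outputs. The sketch below is illustrative rather than a real tool; `summarise` is a hypothetical stand-in for whatever summarisation model a council has deployed, and the term mapping is deliberately crude (it ignores grammatical case, so "her" always becomes "him").

```python
import re

# Crude one-way mapping of gendered terms. A real audit would need proper
# handling of grammatical case ("her" can be possessive or objective).
SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her", "hers": "his",
    "mr": "ms", "ms": "mr", "mrs": "mr",
}

def swap_gender(text):
    """Return the case note with gendered terms swapped, all else untouched."""
    def repl(match):
        word = match.group(0)
        swapped = SWAPS.get(word.lower())
        if swapped is None:
            return word  # not a gendered term; leave verbatim
        return swapped.capitalize() if word[0].isupper() else swapped
    return re.sub(r"\b\w+\b", repl, text)

def counterfactual_audit(case_note, summarise):
    """Summarise the original and gender-swapped note for side-by-side review.

    The two notes differ only in gendered terms, so any systematic difference
    between the paired summaries reflects the model, not the person's needs.
    """
    return summarise(case_note), summarise(swap_gender(case_note))
```

Run across a sample of real notes, consistent divergence in the paired summaries is direct evidence of the kind of bias the LSE study describes.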
The Risk to Equalities Obligations
Local authorities have statutory obligations under the Equality Act 2010 and the Public Sector Equality Duty. If AI tools systematically generate biased language in care assessments, councils may be unknowingly breaching these obligations at scale. The bias is particularly difficult to detect because it operates at the level of word choice rather than explicit discrimination.
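Because the bias lives in word choice, it can in principle be surfaced with a simple term-frequency audit over completed summaries. The sketch below is a minimal illustration: the flagged terms come from the findings above, and the input format, a list of (gender, summary) pairs, is an assumption about how a council might hold the data.

```python
# Flagged terms drawn from the study's reported findings.
FLAGGED = ["complex", "unable", "independent", "able to manage"]

def term_rates(summaries):
    """Return per-gender rates of each flagged term (occurrences per summary).

    `summaries` is a list of (gender, summary_text) pairs. Rates rather than
    raw counts are returned so groups of different sizes compare directly.
    """
    counts, totals = {}, {}
    for gender, text in summaries:
        totals[gender] = totals.get(gender, 0) + 1
        bucket = counts.setdefault(gender, {term: 0 for term in FLAGGED})
        lowered = text.lower()
        for term in FLAGGED:
            bucket[term] += lowered.count(term)
    return {
        gender: {term: n / totals[gender] for term, n in bucket.items()}
        for gender, bucket in counts.items()
    }
```

A marked gap in rates between otherwise comparable male and female caseloads would warrant closer review, though it is not proof of discrimination on its own.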
Preserving Professional Language
The fundamental issue is that AI tools which generate new text introduce risks that do not exist when the practitioner's own language is preserved. A structuring approach, in which the AI organises existing text without rewriting it, cannot introduce gender bias because it does not choose replacement words. The practitioner's assessment language passes through unchanged, simply reorganised into a professional format.
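A minimal sketch of what structuring without rewriting can look like, assuming an illustrative keyword-based schema (the headings and keywords below are invented for the example, not a real assessment framework):

```python
import re

# Illustrative schema: headings and keywords are invented for this example.
SECTIONS = {
    "Mobility": ["walk", "stairs", "mobility", "falls"],
    "Daily living": ["washing", "dressing", "meals", "medication"],
    "Mental health": ["mood", "anxiety", "memory", "confusion"],
}

def structure_note(note):
    """Group the practitioner's own sentences under headings, unchanged.

    No new text is generated: every sentence in the output is a verbatim
    sentence from the input, so the tool has no opportunity to choose
    gendered replacement words.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", note) if s.strip()]
    grouped = {heading: [] for heading in SECTIONS}
    grouped["Other"] = []
    for sentence in sentences:
        lowered = sentence.lower()
        for heading, keywords in SECTIONS.items():
            if any(keyword in lowered for keyword in keywords):
                grouped[heading].append(sentence)  # copied verbatim
                break
        else:
            grouped["Other"].append(sentence)
    return grouped
```

Because the function never generates text, any wording in the output is wording the practitioner chose; the only decisions the tool makes are about grouping.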
For councils subject to equalities obligations, the distinction between tools that rewrite practitioner language and tools that preserve it is directly relevant to compliance.
Sources: Jessica Murray, "AI tools used by English councils downplay women's health issues, study finds", The Guardian, 11 August 2025; accompanying research by the London School of Economics.