AI in Education · 30 March 2026

AI-Assisted Marking at Universities: What It Means for Students

Jisc is piloting AI-assisted marking at UK universities. Students, meanwhile, are warned against over-reliance on AI in assessed work. Is this a contradiction, or a sign that higher education is beginning to grapple seriously with what AI should and should not do?

Recent reporting in Times Higher Education (27 March 2026) has surfaced a tension that many in higher education will find both unsurprising and uncomfortable.

Jisc, the non-profit organisation that supports UK universities with their digital infrastructure, is currently piloting AI-assisted marking and feedback software at ten universities. The model is cautious by design: AI generates draft marks and feedback, which are then reviewed and approved by academics before being returned to students. The stated aim, according to Jisc's senior AI specialist Tom Moule, is to reduce marking workloads, not to replace academic judgement. Full integration of AI into summative assessment, he notes, is still some way off.

The apparent contradiction, however, is immediate.

Universities warn students against over-reliance on AI in assessed work. Academic integrity policies emphasise independent thinking, authentic authorship, and the primacy of the student's own reasoning. Yet those same institutions are now exploring whether AI can assist in the evaluation of that reasoning.

The question writes itself: is this hypocrisy?

The Asymmetry Problem

The discomfort is real and the charge deserves a serious response rather than dismissal.

There is an asymmetry at work. Students are held to a standard ("demonstrate your own thinking") while institutions explore how technology can assist in evaluating that thinking. On the surface, this looks inconsistent.

But the asymmetry only constitutes hypocrisy if we assume the two roles are equivalent. They are not.

Assessment has always involved tools and processes that support but do not replace academic judgement: marking rubrics, moderation systems, feedback templates, examiner standardisation processes. These have never been considered compromises to academic integrity, because the judgement remains with the academic. The tool supports the process. It does not make the decision.

In the Jisc trial, the same logic applies. Academics remain responsible for marks. AI assists with structuring and drafting feedback. The professional judgement (the evaluation of whether a student has demonstrated understanding, met learning outcomes, and engaged critically with material) remains human. What changes is the administrative scaffolding around that judgement.

This mirrors a position that is emerging across the professions. In healthcare, social care, legal practice, and professional education, AI is increasingly accepted where it supports process and structure, but not where it substitutes for professional judgement. The distinction is not semantic. It has governance implications, professional accountability implications, and, in regulated environments, legal implications.

Where the Real Tension Lies

The more substantive issue is not who uses AI, but what role AI is permitted to play in cognitive and professional work.

There is a meaningful distinction between AI that replaces thinking and AI that supports the expression or evaluation of thinking. The critic quoted in Times Higher Education, who argues that "marking and assessment is integral to the learning experience, and taking time over it is part of our obligation to students", is not wrong. Assessment is not merely administrative. It is an act of disciplinary judgement, a form of pedagogical engagement, and a professional responsibility. The concern that AI use in marking may become unexamined or unbounded is legitimate.

But this is an argument for governance, not prohibition.

The risk is not that AI exists in the assessment environment. It already does: in plagiarism detection software, in essay-structuring tools that students use with or without institutional knowledge, and in the feedback-generation tools proliferating across EdTech platforms. The risk is that AI's role becomes undefined, unacknowledged, and therefore unaccountable.

What This Means for Students

If universities adopt AI-assisted marking at scale, they will need to confront a question they have largely avoided: which forms of AI use by students are genuinely incompatible with learning, and which are not?

A blanket prohibition becomes increasingly difficult to sustain when staff are using AI to structure feedback and institutions are openly acknowledging efficiency gains from AI assistance. The credibility of that prohibition depends on a coherent account of why AI use is problematic in one context but acceptable in another.

The more defensible position, and the one that the Jisc trial is, perhaps inadvertently, beginning to force, is a distinction based on the nature of the cognitive work involved. This mirrors the core principle of our Professional AI Documentation Standards (PAIDS) framework: AI should be restricted where it replaces the learning that assessed work is designed to produce.

A student whose reflective portfolio is written by AI has not developed reflective practice. A student whose essay argument is generated by AI has not developed the capacity for disciplinary reasoning. In both cases, the document may pass the assessment. The learning does not happen.

AI may legitimately play a supporting role where it assists with organisation, structure, clarity, or format, provided the thinking behind that structure remains the student's own.

This is not a lowering of academic standards. It is a clarification of what those standards are actually for.

What This Means for Academics and Institutions

The Jisc trial is being introduced carefully, with appropriate safeguards and human oversight. But the governance question it raises extends beyond marking.

Universities that are serious about AI in education need frameworks that go beyond reactive policy: frameworks that define, with some precision, what AI may and may not do in the academic environment, and why. Not because AI is inherently problematic, but because the absence of definition creates the conditions for the asymmetry, and the charge of hypocrisy, to persist.

The sector is in transition. Universities are moving, however unevenly, from reactive restrictions on student AI use toward more structured, role-specific integration of AI across teaching, learning, and assessment. Institutions will move at different speeds, adopt different standards, and reach different conclusions. The Jisc trial illustrates that fragmentation. Yet this inconsistency is not only a risk; it is also an opportunity to learn what works, what fails, and why.

If universities can define clearly what AI may support and what it may not replace, across both student and staff roles, the apparent contradiction dissolves. What remains is a more coherent, more honest, and ultimately more durable account of what higher education is for, and of what the tools that support it are permitted to do.

The question was never whether AI would enter the academic environment. It already has. The question is whether institutions will govern its presence thoughtfully or continue to be caught out by it.

The PAIDS Framework for AI Documentation

Our Professional AI Documentation Standards (PAIDS) framework provides a governance structure for AI use in professional and educational contexts, distinguishing between AI that supports and AI that replaces human judgement.

Explore the PAIDS Framework