Thousands of students across UK universities have been caught using artificial intelligence tools such as ChatGPT to cheat on academic work, marking a significant shift in student misconduct patterns, according to an investigation by The Guardian.
Figures from the 2023–24 academic year reveal that nearly 7,000 confirmed cases of AI-related cheating were recorded, equating to 5.1 cases per 1,000 students.
This is a sharp rise from 1.6 per 1,000 the previous year, and projections suggest the figure could climb further to 7.5 per 1,000 by the end of the current academic year.
While cases of traditional plagiarism are steadily falling, university authorities are now facing a fresh and complex challenge: adapting assessment methods in response to rapidly advancing generative AI technologies.
Prior to the proliferation of tools like ChatGPT, plagiarism accounted for nearly two-thirds of all academic misconduct in 2019–20.
During the pandemic, as exams shifted online, instances of plagiarism surged. But the tide has now turned.
The number of confirmed traditional plagiarism cases fell from 19 per 1,000 students to 15.2 per 1,000 in 2023–24, and early indications suggest it will drop further to around 8.5 per 1,000.
The Guardian submitted Freedom of Information requests to 155 UK universities, of which 131 responded with partial or full data on academic misconduct over the past five years.
However, more than a quarter of these institutions had not yet started recording AI-related offences as a separate category in 2023–24, highlighting how the sector is still catching up with the scale and complexity of AI misuse.
Experts warn that the actual number of students using AI dishonestly may be far higher. A 2024 survey by the Higher Education Policy Institute revealed that 88% of students had used AI tools for assessments. Meanwhile, researchers at the University of Reading found that AI-generated assignments went undetected 94% of the time in their internal trials.
Dr Peter Scarfe, associate professor of psychology at Reading and co-author of the study, noted: “Those caught are likely just the tip of the iceberg. Unlike plagiarism, where copied material can be directly verified, detecting AI-generated content is notoriously difficult—especially without wrongly accusing students.”
He added that shifting every assessment to in-person exams is “not feasible,” acknowledging that students will inevitably continue to use AI, often without detection, even where it is explicitly prohibited.
The ease of access to AI tools is also fuelling the issue. The Guardian found numerous TikTok videos promoting AI essay writers and paraphrasing tools designed to bypass university AI detectors by mimicking human writing styles.
Dr Thomas Lancaster, a specialist in academic integrity at Imperial College London, said: “If students use AI wisely and edit the output, it becomes almost impossible to prove misuse. Still, my hope is they are genuinely learning through this process.”
The Department for Education stated it is investing over £187 million into national skills programmes and has issued guidance for the responsible use of AI in education.
A UK government spokesperson said: “Generative AI presents an exciting opportunity to revolutionise teaching and learning. However, universities must adopt thoughtful strategies to harness AI’s benefits while managing its risks, ensuring students are equipped for the jobs of tomorrow.”
