UK university fails to detect AI-generated exam submissions.

TLDR:

  • Artificial intelligence (AI)-generated exam submissions evaded detection at the University of Reading in the UK.
  • The AI submissions received higher grades, on average, than real students’ answers.

In a recent study, AI-generated exam submissions at the University of Reading in the UK went almost entirely undetected: 94% of the AI submissions were not flagged by exam graders, and on average they earned higher grades than real students’ answers. The findings raise concerns about academic integrity and the ease with which students could cheat using AI tools, particularly as those tools become more capable and widely available.

In the study, Peter Scarfe and colleagues generated answers with the AI chatbot GPT-4 and submitted them under fake student identities to the University of Reading’s examinations system. The AI submissions proved nearly undetectable and outscored a randomly selected group of real student submissions, with clear implications for how universities monitor exams and safeguard academic integrity in the age of advanced AI.

The researchers suggest that a return to supervised, in-person exams could help address the problem, while acknowledging that universities must also adapt to the growing integration of AI tools in education. The study serves as a wake-up call for institutions to develop strategies for detecting and preventing AI-assisted cheating, and to explore the legitimate benefits of incorporating AI into educational practice.