An investigation by The Guardian has found that almost 7,000 students have been caught using AI tools such as ChatGPT for essay work. Experts believe this may be only a fraction of the true number.
AI is back in the headlines this week, as a new investigation by The Guardian has found that thousands of students are using AI tools such as ChatGPT to cheat on essay work.
The rate of students using generative AI has risen steadily over the past few years, while traditional forms of academic misconduct, including plagiarism, have declined.
According to The Guardian's investigation, a survey of academic integrity violations found almost 7,000 proven cases of cheating via AI tools in 2023/24, or 5.1 cases per 1,000 students.
This was up from 1.6 cases per 1,000 the previous year, a more than threefold increase. Current projections suggest the figure will rise again to 7.5 cases per 1,000 this year.
Even so, experts believe these figures only scratch the surface. Many students may be using ChatGPT and other AI tools without ever being detected, posing a sizeable new challenge for academic institutions trying to accurately identify and penalise work that breaks the rules.
In 2019/20, plagiarism reportedly accounted for almost 65% of all academic misconduct, and those numbers rose further as coursework moved online during lockdowns.
However, as AI became more mainstream and widely used, fewer students copied others' work and instead generated their own text from prompts. This still counts as cheating, but it does not technically involve copying anyone.
The Guardian says it contacted 155 universities, and more than a quarter of those that responded did not even record AI misuse as a separate category of misconduct in the last academic year.
This is troubling considering that a recent survey by the Higher Education Policy Institute in February found that a staggering 88% of students used AI in some capacity for their assessments.
For now, it is essentially impossible to fully crack down on the use of AI. Detection tools are not yet accurate or reliable enough to catch every case without falsely accusing innocent students or missing guilty ones entirely.
In addition, there is a moral grey area around what counts as acceptable use of AI in the first place.