
Should we be using AI powered tools in the academic environment?

AI can be a great tool, but what happens when these tools are not well trained?

In recent months, a series of new tools that use Artificial Intelligence have emerged, including ChatGPT and Gemini. 

Each one has been marketed as having the ability to make our lives easier. Whether it’s writing essays for us in seconds, locating reputable sources of information, or even creating illustrations – AI is poised to streamline all sorts of activities. 

It’s no secret that AI software has already been used in academia for a number of years. The widely-used platform Turnitin is touted as a tool that “safeguards academic integrity” and “helps develop students’ original thinking skills with high-quality, actionable feedback that fits easily into teachers’ existing workflows.”

But what would happen if Turnitin’s AI tool wrongly identified students’ writing as plagiarism or even AI-generated due to the student’s use of a large vocabulary?

What if it raised the red flag when a student wrote to a consistently high level throughout a paper?


Well, we no longer have to imagine it, since a number of students have started receiving failing grades after being accused by Turnitin of either using AI or committing plagiarism. 

Multiple university students have filed complaints after being redirected to their faculties’ misconduct departments because these plagiarism tools wrongly identified their work as AI-generated.

It turns out that programs designed to help maintain academic integrity are actually failing to do so. 


It has even been reported that simply using Grammarly to tidy up punctuation and grammar can cause Turnitin to flag an essay submission as AI-assisted.

In this case, it doesn’t matter if a student wrote the whole paper – the use of one tool to fix a typo could see them falsely accused of plagiarism or AI-generated work.

Other students have run their past papers – written before ChatGPT and other AI tools were accessible – through these detectors. They, too, received scores of 60 percent AI use.

The situation has become so serious that students have even been denied the opportunity to prove their innocence.

It’s a growing problem that is occurring in various institutions around the world, causing students to make YouTube videos explaining what to do if you are being falsely accused of using AI. 

More recently, students have created a petition on change.org to stop these platforms from using AI to call out academic misconduct. 
