
Students are using AI language tools to ace assignments

In a world where AI can be used to do almost anything, students are using language algorithms to write their assignments and are circumventing plagiarism software.

As a 14-year-old, I remember attempting to blag my French language homework by using translation tools on Google. My poor attempt failed to fool the teacher, and I promptly found myself in detention.

Despite my terrible execution all those years ago, however, it appears I may have been ahead of the curve.

With the ongoing boom in AI technology, which remains seriously unregulated by the way, students are being awarded distinctions for homework and course assignments constructed entirely by computer algorithms.

We regularly discuss the constant advancements in text-to-image generators like DALL-E, but the emergence of AI for literary work has gone somewhat under the radar. According to a recent investigation by Motherboard, that’s cause for concern for educational institutions.

I mean, who’s to say you’re not reading an article written by a machine right now?


How do language AI tools work?

Until very recently I was completely unaware that OpenAI – the developer of DALL-E – even had models capable of generating original text. Achieving this, it turns out, is no more difficult than conjuring up absurd images.

In a matter of seconds, the program dubbed GPT-3 can take user prompts and generate detailed paragraphs of original text, drawing on patterns learned from vast swathes of the web.

A drop-down menu helps the technology lean into a certain discipline and format of text. For instance, selecting ‘Question’ will provide straightforward answers including key contextual touchpoints, while ‘Debate’ generates original sentences in a more conversational tone.

With the right prompts for both style and information, extended responses can be produced from scratch including entire essays. As you can imagine, this has the potential to open a serious can of worms for schools, colleges, and universities.
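To make that concrete, here is a rough sketch of what a single GPT-3 request looks like through OpenAI’s Python library as it stood at the time. The model name, prompt (borrowed from a student quote further down) and settings are illustrative assumptions on my part, not a recipe anyone in this piece confirmed using.

# A minimal sketch of a GPT-3 completion request via OpenAI's Python
# library (the v0.x interface). Model, prompt and settings are examples.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key goes here

response = openai.Completion.create(
    model="text-davinci-002",  # one of the GPT-3 family of models
    prompt="What are five good and bad things about biotech?",
    max_tokens=400,            # enough room for a few detailed paragraphs
    temperature=0.7,           # a little variety between runs
)

print(response.choices[0].text.strip())  # the generated answer

Swap the prompt and a couple of settings and the same handful of lines will just as happily produce a debate-style response, a straight answer, or a full essay draft.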

As of right now, unearthing any attempt at subversion is nigh-on impossible too, given plagiarism software can only detect instances of repeated phrases or sentences.

While longer-form instances of cheating may stick out to the trained eye, the disparity between plagiarism tech and AI will only widen with the release of GPT-4 – reportedly trained on 100 trillion machine learning parameters.
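For a sense of why that gap exists, here is a toy sketch – my own illustration, not how any particular plagiarism product actually works – of the phrase-matching such tools rely on: flagging long word sequences that appear verbatim in both a submission and a known source. Freshly generated text rarely shares such sequences with anything, so it slips straight through.

# Toy phrase-matching check, in the spirit of plagiarism detectors that
# flag repeated word sequences (n-grams). Purely illustrative.
def shared_ngrams(text_a, text_b, n=6):
    """Return word n-grams that appear verbatim in both texts."""
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(text_a) & ngrams(text_b)

source    = "the mitochondria is the powerhouse of the cell and produces most of its ATP"
copied    = "as we know, the mitochondria is the powerhouse of the cell and produces most of its ATP"
generated = "cells depend on mitochondria to supply the bulk of the ATP they consume"

print(len(shared_ngrams(source, copied)))     # > 0: verbatim overlap, flagged
print(len(shared_ngrams(source, generated)))  # 0: same idea, new wording, undetected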

The end of busy work?

As we mentioned earlier, several students have already given text AI rave reviews in a conversation with Motherboard.

One anonymous student going by the Reddit alias ‘innovate_rye’ revealed they’re using GPT-3 to complete most homework assignments.

‘For Biology, we would learn about biotech and write five good and bad things. I would send a prompt to the AI like: “what are five good and bad things about biotech?” and it would generate an answer that would get me an A,’ they said.

‘I still do my homework on things I need to learn to pass, I just use AI to handle the things I don’t want to do or find meaningless,’ innovate_rye added.

In a post screenshotted and shared on Twitter by Peter Yang, another student on Reddit recently claimed to have made $100 in profit by completing homework for classmates over a two-week period. Little did those classmates know, they could have used the AI themselves.

Elsewhere, an unnamed high school senior generated an entire essay on contemporary world affairs. While they failed to ace the assignment, predominantly because they hadn’t cited outside sources, they did manage to avoid detection.

It’s too early to call AI-written coursework an epidemic, but it feels as though something potentially sticky is brewing here.

The safeguards to ensure students stay honourable aren’t there, and likely won’t be anytime soon, but the onus of responsibility could (theoretically) be put on AI developers to prevent nefarious uses.

At present, there are no signs of that happening. All I can do is plead… please be better than I was, kids.
