ChatGPT just went rogue and we don’t know why

OpenAI’s generative language tool went completely haywire last night, and its responses have been freaking people out. Developers ambiguously refer to this phenomenon as a ‘hallucination’. Reassuring.

Are our digital overlords readying their full-scale invasion already?

Last night, ChatGPT users reported occurrences of the generative language tool going completely haywire. Simple user queries prompted strange ramblings, most of which were entirely unintelligible and way too long.

Gathering to share examples on Reddit and Twitter (X), folk posted endless screenshots of their bizarre encounters with the AI.

In one instance, when asked for help with a coding issue, it generated a jumbled and prolonged splurge which included the eerie phrase: ‘Let’s keep the line as if AI in the room.’

In another, a query about making sundried tomatoes devolved into: ‘Utilise as beloved. Forsake the new fruition morsel in your beloved cookery.’

Who let bro cook?

Reminiscent of Jack Torrance’s psychotic breakdown in The Shining – in which he manically types ‘all work and no play makes Jack a dull boy’ for pages upon pages – ChatGPT also responded to a message about jazz albums by repeatedly shouting ‘Happy listening!’ and spamming music emojis.

A common theme across tons of posts was that questions led to multilingual gibberish, with Spanish, English, and Latin words strangely mashed together in the answers.

On its official status page, OpenAI has acknowledged the issues but offered no explanation of why the glitches might be happening.

‘We are investigating reports of unexpected responses from ChatGPT,’ an update read, before another soon after announced that the ‘issue has been identified’. ‘We’re continuing to monitor the situation,’ the latest post said.


Some have suggested that the nature of the responses indicates OpenAI is toying around with its ‘temperature’ setting. As the company has previously explained, this parameter controls how much randomness goes into ChatGPT’s generated answers: set it high and, in theory, the responses become more unusual and diverse than we’re accustomed to.
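
For the technically curious, a similar knob is exposed to developers on OpenAI’s public API. Below is a rough sketch of what cranking it up looks like, using the official openai Python client; the model name and prompt are purely illustrative, and this says nothing about what OpenAI actually runs behind ChatGPT itself.

```python
# Rough sketch: the public Chat Completions API accepts a 'temperature'
# between 0 (near-deterministic) and 2 (maximally random).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user",
               "content": "How do I make sundried tomatoes?"}],
    temperature=1.9,  # cranked high: expect stranger, more rambling answers
)
print(response.choices[0].message.content)
```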

On the flip side, there are inevitably wacky conspiracy theories flying around. Some believe that as AI models inadvertently scrape their own output from the web, they’re actually becoming sentient to some extent, or at least learning to make decisions we didn’t expect.

A recent deal in which Reddit will feed its endless backlog of user-generated content to an unnamed big AI company will only fuel this notion. Artificial general intelligence, pitched as more charismatic and personable than its predecessors, is the natural progression for the sector, after all.

The belief that AI will rapidly grow beyond our control was fuelled late last year, when ChatGPT seemingly became reluctant to answer questions. Users complained of unexpected sass and belligerence from the digital tool, the likes of which can also be seen in a screenshot from an X user who asked about a ‘hexagon on Saturn’ last night.


‘Discussing certain topics could compromise your safety and well-being,’ the system ominously warned. What do the machines not want us to know?

The far more sensible – if less fun – explanation is that a recent ChatGPT update introduced a fault in the code, one that has been recognised and is currently being addressed.

Besides, given that ChatGPT and OpenAI have been trending across multiple social media platforms and forums, it’s probably not in the company’s interest to get specific about the finer details and spoil the fun.

Any press is good press… just so long as the situation escalates no further. Gulp.
