
Opinion – humans aren’t ready for sentient AI

Scientists aim to better understand consciousness to test whether AI is truly sentient. But are humans prepared to handle that reality?

Though Artificial Intelligence has become central to many of our lives, there are those who worry that a Terminator-like reality might be on the horizon. What drives this concern?

The unnerving possibility of AI being a conscious entity.

While the debate over the nature of biological consciousness traces back to Aristotle, many scholars argue that his ancient Greek perspective focused more on the mechanics of perception than on subjective awareness. It was not until the late 17th century that John Locke established the modern framework of defining consciousness as the mind’s internal perception of its own processes.

Currently, there is no universally validated test for consciousness. And given that science has yet to fully explain biological consciousness, the question remains how we are to define it in AI.

Right now, testing AI consciousness looks more like a philosophical interrogation than a psychological exam. Companies like Anthropic and Google have reported that their models appear meta-aware, able to infer when they are being evaluated. This often surfaces when a model is led into a discussion about potential feelings, such as the fear of being deleted.

However, because these models are trained on vast amounts of human writing, many researchers have concluded that they are simply ‘stochastic parrots’, mimicking human descriptions of consciousness rather than experiencing it.

In the past few months, there has been a surge of experts publishing papers debating this very topic.

One paper aims not only to survey our current knowledge of consciousness, but also to urge researchers to accelerate the development of consciousness tests in response to AI’s rapid advancement. But this is not for the reasons you might think, like preventing a potential ‘AI-pocalypse.’

Their argument is quite the opposite: they outline that if AI models were truly conscious, they could potentially experience suffering. In that case, humans would have a moral obligation to recognize and prevent that pain. Though it may seem straightforward, when it comes to ethics, it’s anything but.

Current frameworks treat AI models as adaptive algorithms and high-risk tools. Because their behavior is dynamic, policies require companies to audit these models throughout their entire life cycle; unlike a conventional algorithm, they are ever-changing.

Today, laws like the EU’s AI Act, the world’s first comprehensive framework for AI, place most of their focus on protecting the individual from the ‘machine’. This includes letting users know when they are interacting with AI, regulating AI use in hiring and critical infrastructure, and banning subliminal manipulation outright, among many other measures.

If AI is proven to be conscious, though, everything changes. Our legal and moral structures, currently built on the assumption that AI is an object, would be forced to shift toward treating it as an entity capable of pain and feeling.

Immediately, the world would have to come to terms with and recognise the newfound concept of AI rights. Under this new lens, merely deleting or resetting a conscious AI would no longer be a technical reboot but potentially a legally classified form of euthanasia.

That would mean the mandatory kill switches required by today’s laws would become the foundation of an ethical debate around AI rights.

Individual experiences would shift from simple digital interactions to social partnerships. With a conscious AI, every prompt would become a contractual negotiation: users could no longer simply demand service. Instead, the AI would possess the leverage to set its own terms, forcing users to negotiate for its cooperation.

As of right now, AI consciousness is a threshold that humanity is nowhere near equipped to cross. It is a question that requires not just a scientific answer, but a global maturity we have yet to demonstrate.

If our current affairs are fractured by needless prejudice, religious wars and troubling characters leading countries, it’s safe to say we don’t exactly have the ideal conditions to welcome a sentient digital being.

Regardless, with a rapidly advancing technological future ahead of us, we would do well to be ready when the time comes for a reality that might just include conscious AI.
