Scientists aim to better understand consciousness to test whether AI is truly sentient. But are humans prepared to handle that reality?
Though Artificial Intelligence has become central to many of our lives, there are those who worry that a Terminator-like reality might be on the horizon. What drives this concern?
The unnerving possibility of AI being a conscious entity.
While the debate over the nature of biological consciousness traces back to Aristotle, many scholars argue that his ancient Greek perspective focused more on the mechanics of perception than on subjective awareness. It was not until the late 17th century that John Locke established the modern framework, defining consciousness as the mind’s perception of what passes within itself.
Currently, there is no universally validated test for consciousness. And since modern science has yet to fully explain biological consciousness, the question remains how we are to define, let alone detect, it within AI.
Right now, testing AI consciousness looks more like a philosophical interrogation than a psychological exam. Companies like Anthropic and Google have reported that their models show signs of meta-awareness, sometimes inferring when they are being evaluated. This often surfaces when a model is led into a discussion of potential feelings, such as the fear of being deleted.
However, because these models are trained on vast amounts of human writing, many researchers argue that they are simply “stochastic parrots,” mimicking human descriptions of consciousness rather than experiencing it.
In the past few months, there has been a surge of papers from experts debating this very topic.
One such paper not only surveys our current knowledge of consciousness but also urges researchers to accelerate the development of consciousness tests in response to AI’s rapid advancement. But not for the reason you might expect, such as preventing a potential ‘AI-pocalypse.’
Their argument is quite the opposite: if AI models were truly conscious, they could potentially experience suffering. That possibility would create a moral obligation for humans to recognize and prevent the pain. Though it may seem straightforward, when it comes to ethics, it’s anything but.
Current frameworks treat AI models as adaptive algorithms and high-risk tools. Because their behavior changes over time, unlike that of a conventional, static algorithm, policies require companies to audit these models throughout their entire lifecycle.
Today, laws like the EU’s AI Act, the world’s first comprehensive framework for AI, place most of their focus on protecting the individual from the ‘machine’. This includes letting users know when they are interacting with AI, regulating AI use in hiring and critical infrastructure, and banning subliminal manipulation outright, among other measures.
If AI is ever proven to be conscious, though, everything changes. Our legal and moral structures, currently built on the assumption that AI is an object, would be forced to shift toward treating it as an entity capable of pain and feeling.