Will AI Achieve Consciousness?

The Blake Lemoine incident is often remembered as a peak of AI hype. For a brief period it pushed the notion of conscious AI into public discourse, starting conversations among computer scientists and consciousness researchers that have only intensified since. The tech community still mocks both the idea and Lemoine himself, yet behind closed doors the possibility is being taken more seriously. A conscious AI may lack an obvious commercial purpose (what’s the business model?) and would raise hard ethical questions (how should we treat a machine that can suffer?). Even so, some AI engineers have begun to suspect that the elusive goal of artificial general intelligence, a machine that is not merely exceptionally intelligent but also possesses human-like understanding, creativity, and common sense, may require something akin to consciousness. The informal taboo against conscious AI, once assumed to be too unsettling for the public, has begun to erode.
A pivotal moment arrived in the summer of 2023 when a group of 19 prominent computer scientists and philosophers released an 88-page report titled “Consciousness in Artificial Intelligence,” informally referred to as the Butlin report. Within days, it seemed that everyone in both the AI and consciousness research communities had engaged with it. The abstract of the draft report contained this striking sentence: “Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious barriers to building conscious AI systems.”
The authors admitted that part of their motivation for assembling the group and drafting the report stemmed from “the case of Blake Lemoine.” A coauthor remarked to Science magazine, “If AIs can give the impression of consciousness, that makes it an urgent priority for scientists and philosophers to weigh in.”
What truly captured attention, however, was the abstract’s declaration that there are “no obvious barriers to building conscious AI systems.” On first reading, those words feel like the crossing of a significant threshold, one that goes beyond mere technological advancement and touches our very identity as a species.
What implications would arise for humanity if we were to discover, in the not-so-distant future, that a fully conscious machine had emerged? It would likely represent a Copernican moment, abruptly shifting our sense of centrality and significance. For millennia, humans have defined themselves against “lesser” animals, often denying them traits believed to be uniquely human, such as emotions (one of Descartes’s most egregious mistakes), language, reasoning, and consciousness. In recent years, many of these distinctions have collapsed, as scientists reveal that numerous species possess intelligence, consciousness, emotions, and even the ability to use language and tools, challenging long-held notions of human exceptionalism. This ongoing transformation raises complex questions about our identity and moral responsibilities toward other species.
With AI as the new frontier, the challenge to our elevated self-image comes from a different direction. We will now have to define our identity in relation to AIs rather than to other animals. As algorithms surpass human capabilities, easily defeating us at chess and Go and excelling in domains such as mathematics, we can take comfort in the fact that we (and many other species) still possess consciousness, with its feelings and subjective experiences. In this framing, AI becomes a common adversary uniting humans and other animals: us against it, the living against the machines. That newfound solidarity makes for a heartwarming narrative and might even benefit the animals admitted to Team Conscious. But what happens if AI begins to challenge the human, or more broadly the animal, claim to consciousness? What will our identity be then?
