Should Artificial Intelligence Be Granted Legal Rights?

In a paper published by Eleos AI, the nonprofit argues for assessing AI consciousness through the lens of "computational functionalism," a theory with roots in the work of the philosopher Hilary Putnam, who himself critiqued it later in his career. It posits that human minds can be thought of as particular kinds of computational systems, which opens the door to asking whether other computational systems, such as a chatbot, show indicators of sentience similar to those of a human.
Eleos AI noted in the paper that "a major challenge in applying" this approach "is that it involves significant judgment calls, both in formulating the indicators and in evaluating their presence or absence in AI systems."
The field of model welfare is young and still evolving, and it has no shortage of skeptics, including Mustafa Suleyman, the CEO of Microsoft AI, who recently published a blog post about "seemingly conscious AI."
"This is both premature, and frankly dangerous," Suleyman wrote, speaking broadly about model welfare research. "All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society."
Suleyman asserted that "there is zero evidence" today that conscious AI exists. He cited a paper Long coauthored in 2023 that introduced a framework for assessing whether an AI system has "indicator properties" of consciousness. (Suleyman did not respond to WIRED's request for comment.)
I spoke with Long and Campbell shortly after Suleyman's post went live. They told me they agreed with much of what he said, but they don't believe model welfare research should be abandoned. Rather, they argue that the harms Suleyman cited are exactly why they want to study the subject in the first place.
"When faced with a complex issue or question, the surest way to fail is to throw up your hands and say 'This is too complicated,'" Campbell remarked. "I believe we should at least make an effort."
Testing Consciousness
Researchers in model welfare focus chiefly on questions of consciousness. Their contention: if we can demonstrate that humans are conscious, the same line of reasoning could be applied to large language models. To be clear, neither Long nor Campbell asserts that AI is conscious today, and neither is certain it ever will be. Their aim is to develop tests that could one day confirm it.
"The delusions stem from individuals contemplating the fundamental question, 'Is this AI conscious?' Having a scientific framework to approach that inquiry is, in my view, incredibly valuable," Long said.
However, in an environment where AI research is often sensationalized in headlines and viral social media clips, profound philosophical dilemmas and complex experiments can easily be misinterpreted. This was evident when Anthropic released a safety report indicating that Claude Opus 4 might take "harmful actions" under extreme conditions, such as blackmailing a fictional engineer to avoid being shut down.