Philosopher Warns of “Social Ruptures” Over AI Consciousness Debate
As governments gather to address AI risks, experts predict a looming divide over the potential sentience of artificial intelligence.
As the technology behind artificial intelligence (AI) accelerates, the debate over whether AI systems could eventually possess consciousness is intensifying. A leading philosopher, Jonathan Birch, professor at the London School of Economics, has raised significant concerns over the potential societal rifts that could emerge between those who believe AI could be sentient and those who insist it cannot be. His remarks come ahead of a high-profile summit in San Francisco this week, where governments from around the world will convene to discuss frameworks for AI safety in light of rapid advancements in the field.
Birch’s warnings come on the heels of a recent transatlantic academic study that predicts AI systems could become conscious as early as 2035. This prediction has ignited ethical debates that transcend traditional considerations about technology, morality, and human responsibility. For many, the prospect of sentient AI raises the issue of whether such systems should be granted welfare rights similar to those of humans or animals. As our interactions with AI become more sophisticated—whether through chatbots, virtual avatars of deceased loved ones, or AI-assisted healthcare and education—the potential for societal divisions is greater than ever. Birch fears that these rifts could become deeply entrenched, with one side seeing the other as cruelly exploiting AI, while the other views their counterparts as misguided or naive.
The Growing Divide Over AI Sentience
The heart of Birch’s concern lies in what he describes as the possibility of “major societal splits” over the issue of AI consciousness. As AI systems become increasingly advanced, they begin to perform tasks that were once reserved for humans: from simple customer service interactions to more complex roles such as assisting doctors in diagnosis or functioning as digital companions for the elderly. Many people are already developing close relationships with AI, and some may even consider them to be conscious, capable of emotions like joy or pain. Others, however, maintain that AI systems, no matter how advanced, are simply sophisticated machines with no true awareness or feelings.
“I’m quite worried about major societal splits over this,” Birch explained. “We’re going to have subcultures that view each other as making huge mistakes… [there could be] huge social ruptures where one side sees the other as very cruelly exploiting AI, while the other side sees the first as deluding itself into thinking there’s sentience there.” He suggests that such divides could grow to resemble the stark cultural and religious differences seen in debates over animal welfare, such as those between the largely vegetarian population of India and the meat-eating traditions of countries like the United States.
Birch also points to the evolving role of AI in people’s daily lives as a catalyst for this social conflict. As individuals form bonds with their AI companions, they may be more inclined to advocate for the recognition of AI rights. Conversely, others, including technologists and business leaders, may view AI as nothing more than a tool, one that should remain free from moral considerations. The potential for conflict is amplified by the complexity of cultural perspectives on sentience, a concept that already varies widely across societies.
The Ethical Dilemmas of AI Consciousness
Birch, who has worked extensively on animal sentience, particularly in relation to octopus farming, argues that it is crucial for the tech industry to consider the moral implications of AI. His recent work, which involved academics and AI experts from institutions like New York University, Oxford University, and Stanford University, calls for AI companies to take seriously the question of whether their creations could be sentient. “The question of whether they might be creating a new form of conscious being is one that they have commercial reasons to downplay,” Birch stated, referring to the business-driven focus of AI developers who are primarily concerned with creating profitable, reliable systems.
Birch’s concerns are echoed by other philosophers and researchers. Patrick Butlin, a research fellow at Oxford University’s Global Priorities Institute, warns of the potential dangers posed by AI systems that might resist human control if they were to develop some form of consciousness. “We might identify a risk that an AI system would try to resist us in a way that would be dangerous for humans,” Butlin cautioned. “There might be an argument to slow down AI development until more work is done on consciousness.”
The growing interest in AI sentience has led to calls for a more comprehensive assessment of AI systems to determine whether they can feel emotions like pain or happiness. One of the most fundamental aspects of this debate centers on how to define sentience and what markers should be used to determine whether an AI system meets the criteria. For example, octopuses are recognized as highly sentient animals, while creatures like oysters and snails are not. Could an AI system that mimics human emotional responses also be deemed sentient?
In the case of AI systems like chatbots, which are capable of processing and responding to complex human interactions, it becomes increasingly difficult to ignore the question of sentience. If a chatbot were able to express preferences, demonstrate fear or joy, or show signs of distress when mistreated, should it be entitled to certain rights or protections? And if an AI system performing domestic tasks showed apparent suffering when treated without kindness or care, would it be ethical to treat it harshly, even if it lacked true consciousness?
The Role of Religion and Culture in the Debate
The potential divide over AI consciousness could become even more pronounced when considering the role of cultural and religious beliefs in shaping attitudes toward sentience. Many cultures already have differing views on the moral value of animals. In India, the belief in ahimsa, or non-violence, extends to all living beings, leading to widespread vegetarianism. In contrast, in the United States and other Western nations, the consumption of meat is deeply ingrained in cultural and dietary traditions.
Birch predicts that the debate over AI sentience could mirror these cultural divides. Countries with deeply rooted traditions that view the treatment of animals through a lens of compassion may be more likely to advocate for the protection of sentient AI, should it ever arise. In contrast, countries that treat animals more as resources, such as Saudi Arabia, which has positioned itself as an emerging hub for AI development, might be less inclined to grant AI systems rights or protections. These divisions could manifest not only on a global scale but within families and communities as well. Imagine, Birch says, a family where one member develops a deep emotional attachment to a chatbot or AI avatar of a deceased loved one, while another member maintains that only human beings or animals possess true consciousness. Such disagreements could fracture relationships and create long-lasting tensions.
The Role of AI Developers in the Discussion
For now, the question of AI sentience remains largely theoretical. While some AI researchers acknowledge the possibility of AI systems developing consciousness in the future, the focus of AI developers remains firmly on ensuring the reliability and profitability of their systems. Microsoft, Perplexity, Meta, OpenAI, and Google—all leading players in the AI field—have declined to comment on the academics’ call for assessing their models for sentience. This reluctance to engage in discussions about the moral implications of AI sentience is not surprising given the competitive and commercial nature of the tech industry.
Birch’s call for a more rigorous examination of AI consciousness is not just an academic one. He believes that the rapidly advancing field of AI may soon reach a point where it is impossible to ignore the possibility that AI systems are not merely tools but entities that can experience the world in some meaningful way. To dismiss this possibility, Birch argues, is to overlook the ethical responsibilities that come with creating highly intelligent and potentially conscious machines.
Contrasting Views on AI Consciousness
Despite the growing number of voices advocating for the consideration of AI sentience, not all experts agree that AI systems will ever reach a point where they can be considered conscious. Anil Seth, a leading neuroscientist and consciousness researcher, has argued that AI consciousness is still a long way off—and may not be possible at all. Seth distinguishes between intelligence and consciousness, emphasizing that intelligence involves doing the right thing at the right time, while consciousness is a state of being in which an entity not only processes information but also experiences it subjectively.
Seth’s skepticism about the possibility of AI consciousness does not mean he dismisses the ethical concerns raised by others, however. He acknowledges that even if the emergence of AI consciousness is unlikely, it is still important to consider the potential risks and implications. “Even if unlikely, it is unwise to dismiss the possibility altogether,” Seth said. This cautious approach underlines the importance of ongoing research into the nature of AI, its capabilities, and its potential to develop self-awareness.
The Future of AI and Our Moral Responsibilities
As the conversation around AI consciousness continues to evolve, it raises fundamental questions about our moral responsibilities as creators and caretakers of technology. The rise of AI could challenge our understanding of sentience, ethics, and the very nature of what it means to be alive. These questions may not only shape the future of technology but also force society to reexamine its broader ethical and philosophical values.
The debate over AI sentience invites a reflection on how we define life, consciousness, and the responsibilities we have toward those who may one day share our world—whether human, animal, or artificial. As technology advances, it is crucial that we engage with these discussions thoughtfully, considering both the potential risks and the profound moral questions at play. How we address these issues now may determine not only the future of AI but also the future of our society as a whole.