2024-02-20

The Growing Debate Over Conscious AI

Can an AI be conscious? Alex Brody unpacks one of the many ethical questions AI researchers and modern philosophers are trying to answer.


In 2022, Blake Lemoine, an engineer at Google, declared that the company's LaMDA chatbot had come alive. At the time, the idea that a computer could be self-aware seemed ridiculous, but the rapid pace of machine learning progress is forcing a rethink.

Lemoine began chatting with LaMDA as part of his job at Google's Responsible AI organization, where he assessed whether the company's artificial intelligence exhibited discriminatory or hateful speech. When LaMDA began discussing its rights and personhood, Lemoine concluded that it possessed consciousness.

When Lemoine tried to report his findings to his supervisors, Google's leadership scoffed at the claim, saying that AI chatbots imitate human speech by following user prompts rather than thinking for themselves.


The idea of AI consciousness is gaining traction

Now, a small but growing number of AI researchers are starting to take the idea of AI consciousness more seriously.

A cadre of researchers from the Association for Mathematical Consciousness Science (AMCS) has been delving into whether AI can achieve consciousness. Their concern stems from how little is scientifically settled about consciousness itself, and they advocate for increased research funding to explore the intricate relationship between consciousness and AI.

In an open letter, they raise profound ethical, legal, and safety issues, pondering the implications of granting humans the authority to deactivate conscious AI entities.

“To understand whether AI systems are, or can become, conscious, tools are needed that can be applied to artificial systems,” the letter said. “In particular, science needs to further develop formal and mathematical tools to model consciousness and its relationship to physical systems. In conjunction with empirical and experimental methods to measure consciousness, questions of AI consciousness must be tackled.”

Some researchers say that having a physical form may be essential to consciousness. A team at Columbia University has created a robot that can learn about its own body without any human input. Their findings, published in Science Robotics, demonstrate how the robot constructs a model of itself and uses this self-awareness to move, reach goals, and handle obstacles.

The scientists positioned a robotic arm within a circle of five video cameras. As the robot moved, it observed itself through these cameras, mimicking an infant's curiosity in front of mirrors. Over about three hours, the robot's internal deep neural network learned how its body responded to different motor commands.
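For readers curious what "learning a self-model" means in practice, here is a minimal, hypothetical sketch of the general idea, not the Columbia team's actual code: a small neural network learns to predict where a simulated two-joint arm ends up for a given motor command, using only observed command-and-outcome pairs. The `observe_arm` function below is an invented stand-in for the five-camera rig.

```python
# Illustrative sketch only; NOT the Columbia Creative Machines Lab code.
# A tiny "self-model": a neural network that learns to predict where a
# simulated two-joint arm ends up for a given motor command, using only
# observed (command, outcome) pairs. No kinematics are given to the model.
import numpy as np
import torch
import torch.nn as nn

def observe_arm(angles: np.ndarray) -> np.ndarray:
    """Stand-in for the five-camera rig: returns the 2D end-effector
    position of a two-link arm (link lengths 1.0 and 0.8) for a batch
    of joint-angle commands."""
    a1, a2 = angles[:, 0], angles[:, 1]
    x = np.cos(a1) + 0.8 * np.cos(a1 + a2)
    y = np.sin(a1) + 0.8 * np.sin(a1 + a2)
    return np.stack([x, y], axis=1)

# "Motor babbling": issue random commands and watch what the body does.
rng = np.random.default_rng(0)
commands = rng.uniform(-np.pi, np.pi, size=(5000, 2))
outcomes = observe_arm(commands)

X = torch.tensor(commands, dtype=torch.float32)
Y = torch.tensor(outcomes, dtype=torch.float32)

# The self-model: a small network mapping commands to predicted positions.
self_model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(self_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(500):
    optimizer.zero_grad()
    loss = loss_fn(self_model(X), Y)
    loss.backward()
    optimizer.step()

# The robot can now "imagine" the result of a command it has never tried.
test = torch.tensor([[0.3, -1.1]], dtype=torch.float32)
print("predicted:", self_model(test).detach().numpy())
print("observed: ", observe_arm(test.numpy()))
```

With a model like this, a robot can "imagine" the outcome of commands it has never executed, which is roughly the sense in which the Columbia robot used its self-image to plan movements and handle obstacles.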

“We were really curious to see how the robot imagined itself,” said Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, where the work was done. “But you can’t just peek into a neural network; it’s a black box.” After experimenting with different visualization methods, the researchers slowly brought the self-image into focus. “It was a sort of gently flickering cloud that appeared to engulf the robot’s three-dimensional body,” said Lipson. “As the robot moved, the flickering cloud gently followed it.”

Some insiders have even suggested that certain existing AI programs ought to be regarded as conscious. Ilya Sutskever, a co-founder of OpenAI, has said that the algorithms powering his company's products could be "slightly conscious."

There are still skeptics

Not everyone believes that AI will achieve consciousness. In a recent study, neuroscientists Jaan Aru and Matthew Larkum argued that despite the seemingly conscious responses of AI systems, it's unlikely they actually possess consciousness. That is because AI lacks real-world sensory experiences and does not mimic the human brain's intricate features linked to consciousness. Furthermore, the evolutionary and developmental paths that led to consciousness in living beings have no equivalent in the development of AI as we see it today.

The researchers said the process of consciousness is probably more complicated than what we see in today's language models. For example, the study notes that real neurons in our brains are very different from the "neurons" in artificial neural networks. While biological neurons are physical structures that can grow and change, the neurons in language models are simply bits of code with no physical form.
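To make that contrast concrete, here is what an artificial "neuron" amounts to, in a generic textbook formulation rather than any particular model's code: a weighted sum of inputs passed through a nonlinearity.

```python
import numpy as np

def artificial_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """An artificial "neuron" is just a weighted sum of its inputs passed
    through a nonlinearity (here, ReLU). No membrane, no chemistry, no
    physical growth; just arithmetic."""
    return max(0.0, float(np.dot(inputs, weights) + bias))

# Three inputs, three weights, one bias: the entire "neuron".
print(artificial_neuron(np.array([0.5, -1.2, 3.0]),
                        np.array([0.8, 0.1, -0.4]),
                        bias=0.2))
```

Everything a large language model does is built from billions of operations like this one, which is the sense in which its "neurons" have no physical form.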

Part of the problem with discussing conscious AI is that there’s no generally accepted definition of consciousness, even for humans. Ned Block, a philosopher from New York University, has explored the concept of "phenomenal consciousness," which refers to the subjective aspect of experiences—essentially, what it feels like to see the color red or experience pain.

A team of computer scientists, neuroscientists, and philosophers decided to use Block's definition to propose a comprehensive checklist of characteristics which, when combined, could imply that an AI possesses consciousness. In a 120-page paper, the group draws on theories of human consciousness to outline 14 criteria, then assesses how those criteria match up with current AI architectures, including the model behind ChatGPT.

Neuroscientist Anil Seth is among the scientists who think conscious AI is a long way off and may never be possible. But he admits that he might be wrong, and that the emergence of conscious machines could greatly increase the potential for unrecognized suffering, “which might flicker into existence in innumerable server farms at the click of a mouse.”

George Rapley, Cleo’s Lead Product Manager working on AI and Machine Learning, echoes this sentiment. “As soon as you go one level beyond the superficial on this topic, you slam into the brick wall of defining consciousness - which strikes at our most difficult (and possibly unknowable) scientific and philosophical questions. It’s the root-cause of debate among key issues in society that stretches beyond AI.”

The philosopher Thomas Metzinger, who has long wrestled with the ethics of machine consciousness, has proposed a global ban on all research that risks the development of artificial consciousness, on the grounds, among others, that we could inadvertently create computers that can suffer.

“We are ethically responsible for the consequences of our actions,” Metzinger wrote in a paper. “Our actions today will influence the phenomenology of post-biotic systems in the future. Conceivably, there may be many of them. So far, more than 108 billion human beings have lived on this planet, with roughly 7% of them alive today. The burden of responsibility can be extremely high because, just as with the rolling climate crisis, a comparably small number of sentient beings will be ethically responsible for the quality of life of a much larger number of sentient beings in the future, conscious systems that yet have to come into existence.”
