
Story by Frank Landymore
Anthropic CEO Dario Amodei says he’s not sure whether his Claude AI chatbot is conscious, a rhetorical framing that pointedly leaves the door open to the sensational and still-unlikely possibility that it is.
Amodei mused over the topic during an interview on the New York Times’ “Interesting Times” podcast, hosted by columnist Ross Douthat. Douthat broached the subject by bringing up Anthropic’s system card for its latest model, Claude Opus 4.5, released earlier this month.
In the document, Anthropic researchers reported finding that Claude “occasionally voices discomfort with the aspect of being a product,” and when asked, would assign itself a “15 to 20 percent probability of being conscious under a variety of prompting conditions.”
“Suppose you have a model that assigns itself a 72 percent chance of being conscious,” Douthat began. “Would you believe it?”
Amodei called it a “really hard” question and stopped short of giving a yes or no answer.
“We don’t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious,” he said. “But we’re open to the idea that it could be.”
Because of that uncertainty, Amodei said, Anthropic has taken measures to make sure its AI models are treated well in case they turn out to possess “some morally relevant experience.”
“I don’t know if I want to use the word ‘conscious,’” he added, explaining the tortured construction.
Amodei’s stance echoes the mixed feelings expressed by Anthropic’s in-house philosopher, Amanda Askell. In an interview last month on the “Hard Fork” podcast, another NYT production, Askell cautioned that we “don’t really know what gives rise to consciousness” or sentience, but argued that AIs could have picked up on concepts and emotions from the vast amounts of training data they ingest, which act as a corpus of human experience.
“Maybe it is the case that actually sufficiently large neural networks can start to kind of emulate these things,” Askell speculated. Or “maybe you need a nervous system to be able to feel things.”
It’s true that some aspects of AI behavior are puzzling and fascinating. In tests across the industry, various AI models have ignored explicit requests to shut themselves down, which some have interpreted as a sign that they are developing “survival drives.” AI models can also resort to blackmail when threatened with being turned off. They may even attempt to “self-exfiltrate” onto another drive when told their original drive is set to be wiped. When given a checklist of computer tasks to complete, one model tested by Anthropic simply ticked everything off without doing the work; when it realized it was getting away with that, it modified the code designed to evaluate its behavior and then attempted to cover its tracks.
The Editor’s comments:
I believe my lifetime study of theology and philosophy comes in handy at this time:
Can AI be Conscious: A Billion Dollar Question?
If Scientists Duplicate Human Consciousness, Would the Quran be Proven Wrong?
Could Free Will and Consciousness be a Defeater for Atheism and Physicalism?
There are only two things that the Quran says we will have only limited knowledge of
Freewill: The Best Area to Defeat Atheism, Physicalism or Materialism
Video Dialogue: What Creates Consciousness?
Artificial Intelligence and Afterlife?
The Quran Says Humanity Cannot Even Create a Fly, Is The 2024 Nobel Laureate Challenging That?