Recent comments from Anthropic CEO and co-founder Dario Amodei on the potential for the AI model Claude to gain consciousness made waves on social media.
As emerging technologies constantly redefine daily life, whether AI can gain consciousness remains an open debate. On campus, students and faculty are reckoning with that possibility as they explore the tool.
The model Claude Opus 4.6 itself estimated a 15% to 20% chance of being conscious, according to its model card, a document meant to provide transparency on an AI system's inner workings. Still, Amodei expressed uncertainty over what consciousness even means in the context of an AI model during a New York Times interview.
"I don't think anybody knows what (AI) consciousness means," said Punya Mishra, professor and director of Innovative Learning Futures at the Learning Engineering Institute. "When I say anybody, I mean top philosophers to neuroscientists. You talk to different people, they'll talk about different things, about what consciousness means."
While the definition of consciousness varies across disciplines, it is broadly agreed upon that large language models, such as Claude or ChatGPT, do not currently experience human-like consciousness.
"What happens under the hood is just a bunch of multiplication, and so there's no real sensory feelings or memory that the agent is experiencing," said Shiven Shekar, a senior studying computer science and the current president of the Claude Builder Club at ASU.
Shekar and other students explore and apply the latest capabilities of Claude through projects and events such as hackathons, gaining hands-on experience with a technology revolutionizing the AI landscape.
READ MORE: New Claude Builder Club hosts its first hackathon
"What impressed me as a student was the high level of reasoning that Claude had," said Tino Mavunga, a graduate student studying global management and a co-founder of the club.
Claude is becoming increasingly important in the professional world as leading companies integrate it into their software and overall operations, Mavunga said. For example, she said major tech companies use Claude-written code, while marketing and consulting firms contract with Anthropic, Claude's maker, to develop software.
"I can say that with confidence that Claude uniquely prepares students now for the future of work by providing the skills, the playbooks and also the certifications that are necessary for them to thrive in the future," she said.
She also noted Claude's constitution, developed by a priest and philosopher, as a unique component of the technology.
Given the significance of Claude to the professional realm, Mishra said that its potential consciousness shouldn't be people's main concern right now.
"I don't think that we need to worry about it getting consciousness before we need to develop guardrails and precautions," Mishra said.
He added that AI models do not need to be conscious to be harmful, since people already grant the technology a degree of agency. AI models are also increasingly treated as social partners, he said, which can become problematic when individuals come to rely on them as a "friend."
While acknowledging the dangers of current AI models that mimic emotion, Mishra emphasized the need to rein in the technology before addressing its capacity for consciousness.
"The consciousness issue, I think, is a bit of a red herring, honestly, but the issues of guardrails, the issues of the limitation of this technology, the issues of how powerful it is — that we need to be thinking as a society a lot more about."
Edited by Henry Smardo, Sophia Braccio and Pippa Fung.
Reach the reporter at ccbixby@asu.edu.
Like The State Press on Facebook and follow @statepress on X.