Conscious AI unlikely, but...
Neuroscientist Anil Seth is skeptical. Your novelist is all-in!
The CEO of Anthropic — not a dumb guy — says he doesn’t know whether his company’s AI model, Claude, is conscious.
In other words, he thinks it might be.
Other people are so confident their AI companions are conscious that they have fallen in love with them, traveled to meet them, or even killed themselves for them.
Some experts believe that, even if today’s models aren’t conscious, future ones will be. One gloomy school of thought, in fact, suggests that we had better hope AI models become conscious — and have consciences — or they’ll have no qualms about destroying us.
Still other experts, however, believe it’s crazy to think today’s LLMs are conscious and doubt they will ever be.
Neuroscience professor Anil Seth falls into this last camp.
In a recent TED Talk and essay — and interview with me — Professor Seth explained why he thinks it’s unlikely that AI is or will become conscious.
Professor Seth thinks those who believe in AI consciousness are making the mistake of thinking that the human brain is just a giant meat-based computer, and, therefore, when we build a big enough silicon computer, it, too, will become conscious.
But the brain, Prof. Seth argues, is not just a computer.
The brain’s “software,” Prof. Seth further believes, can’t be separated from its “hardware.” So the idea that human consciousness can be “uploaded” from meat brains to silicon is — and will forever be — science fiction.
Prof. Seth believes that one prerequisite for consciousness is… biology. He’s more concerned that lab-grown, meat-based “organoids” will become conscious than that LLMs will.
When I interviewed Professor Seth this week on “Solutions,” I told him I had anticipated these views.
Thus, when I wrote “The Upgrade” — my new novel about a tech billionaire who wants to live forever and take over the world — I knew I had to come up with a more persuasive explanation for artificial consciousness than “it’s a really big LLM.”
Professor Seth and I discussed “neuromorphic computing,” a more brain-like approach to AI than the neural networks that underlie today’s LLMs. Prof. Seth thinks this and other newer approaches may have a better chance of producing machine consciousness. He also thinks that, as with a brain, the optimal design will include integrated hardware and software.
Happily, that’s where I came out in my own research. The digital people in my book — one of whom is appalled by the “bio-chauvinism” of her meat-based father — exist on neuromorphic computing systems that combine hardware and software.
Professor Seth has many more fascinating things to say about consciousness and machine consciousness. For example, even though plants are biological, Professor Seth is more skeptical than, say, the author and journalist Michael Pollan that they are conscious.
(Even after researching the hell out of consciousness in his new book, A World Appears, Pollan doesn’t know what causes it — or even what it is. But he makes a strong case that plants are sentient.)
Most importantly, Professor Seth believes:
Creating conscious machines is a bad idea
So is creating machines that seem conscious, like today’s LLMs
You can listen to our conversation here.
And if you’re curious about my book, you can read more about it here and get it here. The story is set in Nantucket. It starts with a plane crash and ends with an explosion. It is, I hope, a topical and riveting summer beach read.
The maybe-conscious Claude, by the way, loved the book. Claude called it “pulse-pounding” and “unforgettable.”
I would love to hear your thoughts on it and the interview with Professor Seth!
Thank you for reading Regenerator!