We are far from creating sentient machines, neuroscientists argued in an opinion piece published in the journal Trends in Neurosciences.
When humans interact with AI systems like ChatGPT, we consciously perceive the text generated by the language model.
The real question, posed by neuroscientists from Estonia, Germany, and Australia, is whether the AI can genuinely perceive the inputs we use to interact with it, or whether it is simply a pattern-matching machine, a clever robot with no inner experience.
In this study, Jaan Aru, Matthew Larkum, and Mac Shine approach this question from a neuroscientific perspective.
These three researchers, all with backgrounds in neuroscience, contend that even though systems like ChatGPT may appear to respond as if they are conscious, they are probably not.
They offer several reasons to support this claim.
First, the information given to language models is different from the way we perceive the world through our senses.
Second, they point out that current AI algorithms lack critical features of the thalamocortical system, which is associated with conscious awareness in the mammalian brain.
Lastly, they argue that the way conscious living things like humans develop and evolve is very different from how AI systems work today. Living things depend on their actions and survival, which involve complex processes at various levels, from cells to the entire organism, leading to their ability to think and be aware.
AI systems don’t have these processes in the same way.
The authors emphasize that the neural mechanisms underlying consciousness in the human brain are likely far more complex than the mechanisms underlying current language models.
They note that there is no consensus among scientists about how consciousness arises in our brains. For example, real biological neurons differ significantly from the neurons in artificial neural networks.
Biological neurons are physical entities capable of growing and changing shape, whereas the "neurons" in artificial neural networks, even though they were loosely inspired by how neurons work in animal brains, are at their core merely lines of code.
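To make the contrast concrete, here is a minimal sketch (an illustrative example, not code from the study) of what an artificial "neuron" actually is: a weighted sum of its inputs passed through an activation function, nothing more.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A single artificial 'neuron': a weighted sum of inputs plus a bias,
    squashed through a sigmoid activation. Unlike a biological neuron,
    it has no physical body, cannot grow, and cannot change shape --
    it is just arithmetic on numbers."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps the sum into (0, 1)

# Example: two inputs, illustrative hand-picked weights
output = artificial_neuron([1.0, 0.5], [0.4, -0.2], 0.1)
print(output)
```

Large language models are, in essence, millions of such units wired together, which is why the authors describe them as pattern-matching software rather than perceiving organisms.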
In conclusion, the authors caution against assuming that systems like ChatGPT are conscious, as such an assumption would underestimate the intricate complexity of the neural mechanisms responsible for consciousness in the human brain.
Even as generative AI models like ChatGPT roll out new features and disrupt many occupations (the World Economic Forum estimated that 85 million jobs will be lost to AI by 2025), we can take some solace in the idea that such systems are unable to become truly aware.