2 Sources
[1]
AI's Next Frontier? An Algorithm for Consciousness
Some of the world's most interesting thinkers about thinking think they might've cracked machine sentience. And I think they might be onto something.

As a journalist who covers AI, I hear from countless people who seem utterly convinced that ChatGPT, Claude, or some other chatbot has achieved "sentience." Or "consciousness." Or -- my personal favorite -- "a mind of its own." The Turing test was aced a while back, yes, but unlike rote intelligence, these things are not so easily pinned down. Large language models will claim to think for themselves, even describe inner torments or profess undying loves, but such statements don't imply interiority. Could they ever?

Many of the actual builders of AI don't speak in these terms. They're too busy chasing the performance benchmark known as "artificial general intelligence," which is a purely functional category that has nothing to do with a machine's potential experience of the world. So -- skeptic though I am -- I thought it might be eye-opening, possibly even enlightening, to spend time with a company that thinks it can crack the code on consciousness itself.

Conscium was founded in 2024 by the British AI researcher and entrepreneur Daniel Hulme, and its advisers include an impressive assortment of neuroscientists, philosophers, and experts in animal consciousness. When we first talked, Hulme was realistic: There are good reasons to doubt that language models are capable of consciousness. Crows, octopuses, even amoebas can interact with their environments in ways chatbots cannot. Experiments also suggest that AI utterances do not reflect coherent or consistent states. As Hulme put it, echoing the wide consensus: "Large language models are very crude representations of the brain."

But -- a big but -- everything depends on the meaning of consciousness in the first place. Some philosophers argue that consciousness is too subjective a thing to ever be studied or re-created, but Conscium is betting that if it exists in humans and other animals, it can be detected, measured, and built into machines.

There are competing and overlapping ideas about the key characteristics of consciousness, including the ability to sense and "feel," an awareness of oneself and one's environment, and what's known as metacognition, or the ability to think about one's own thought processes. Hulme believes that the subjective experience of consciousness emerges when these phenomena are combined, much as the illusion of movement is created when you flip through sequential images in a book.

But how do you identify the components of consciousness -- the individual animations, as it were, plus the force that combines them? You turn AI back on itself, Hulme says. Conscium aims to break conscious thought into its most basic form and catalyze that in the lab.

"There must be something out of which consciousness is constructed -- out of which it emerged in evolution," said Mark Solms, a South African psychoanalyst and neuropsychologist involved in the Conscium project. In his 2021 book, The Hidden Spring, Solms proposed a touchy-feely new way to think about consciousness. He argued that the brain uses perception and action in a feedback loop designed to minimize surprise, generating hypotheses about the future that are updated as new information arrives. The idea builds upon the "free energy principle" developed by Karl Friston, another noteworthy, if controversial, neuroscientist (and fellow Conscium adviser).

Solms goes on to suggest that, in humans, this feedback loop evolved into a system mediated through emotions, and that it is these feelings that conjure up sentience and consciousness. The theory is bolstered by the fact that damage to the brain stem, which has a critical role in regulating emotions, seems to cause consciousness to vanish in patients.

At the end of his book, Solms proposes a way to test his theories in a lab. Now, he says, he's done just that. He hasn't released the paper, but he showed it to me. Did it break my brain? Yes, a bit.

Solms' artificial agents live in a simple computer-simulated environment and are controlled by algorithms with the kind of Fristonian, feeling-mediated loop that he proposes as the foundation of consciousness. "I have a few motives for doing this research," Solms said. "One is just that it's fucking interesting."

Solms' lab conditions are ever-changing and require constant modeling and adjustment. The agents' experience of this world is mediated through simulated responses akin to fear, excitement, and even pleasure. So they are, in a word, pleasure-bots. Unlike the AI agents everyone talks about today, Solms' creations have a literal desire to explore their environment; and to understand them properly, one must try to imagine how they "feel" about their little world.

Solms believes it should eventually be possible to merge the approach he is developing with a language model, thereby creating a system capable of talking about its own sentient experience.
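To make that loop concrete, here is a minimal Python sketch of a surprise-minimizing agent with a crude affect signal. It illustrates the general Fristonian idea described above, not Solms' unreleased implementation; the environment, the valence rule, and the update step are all assumptions invented for this sketch.

```python
import random

class FeelingAgent:
    """A toy agent that maintains a hypothesis about its world and a crude
    valence ("feeling") signal derived from how surprised it is.
    Illustrative only; not Solms' actual agent design."""

    def __init__(self, learning_rate: float = 0.2):
        self.expectation = 0.5        # hypothesis about the next observation
        self.learning_rate = learning_rate
        self.affect = 0.0             # > 0 ~ "pleasure", < 0 ~ "fear"/discomfort

    def step(self, observation: float) -> None:
        # Prediction error: how surprised the agent is by what it just sensed.
        surprise = observation - self.expectation
        # An invented valence rule: large surprise "feels bad".
        self.affect = -abs(surprise)
        # Revise the hypothesis to reduce future surprise as new information arrives.
        self.expectation += self.learning_rate * surprise

if __name__ == "__main__":
    agent = FeelingAgent()
    for t in range(20):
        # A drifting, noisy world that the agent must keep re-modelling.
        observation = 0.8 + random.uniform(-0.1, 0.1)
        agent.step(observation)
        print(f"t={t:2d}  expectation={agent.expectation:.3f}  affect={agent.affect:+.3f}")
```

A fuller active-inference agent would also choose actions so as to reduce expected surprise; this sketch models only the perceive-and-update half of the loop.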
[2]
The hardest part of creating conscious AI might be convincing ourselves it's real
As far back as 1980, the American philosopher John Searle distinguished between strong and weak AI. Weak AIs are merely useful machines or programs that help us solve problems, whereas strong AIs would have genuine intelligence. A strong AI would be conscious.

Searle was skeptical of the very possibility of strong AI, but not everyone shares his pessimism. Most optimistic are those who endorse functionalism, a popular theory of mind that takes conscious mental states to be determined solely by their function. For a functionalist, the task of producing a strong AI is merely a technical challenge. If we can create a system that functions like us, we can be confident it is conscious like us.

Recently, we seem to have reached a tipping point. Generative AIs such as Chat-GPT are now so advanced that their responses are often indistinguishable from those of a real human -- see this exchange between Chat-GPT and Richard Dawkins, for instance.

The question of whether a machine can fool us into thinking it is human is the subject of a well-known test devised by the English computer scientist Alan Turing in 1950. Turing claimed that if a machine could pass the test, we ought to conclude it was genuinely intelligent. Back in 1950 this was pure speculation, but according to a pre-print study from earlier this year -- that is, a study that has not yet been peer-reviewed -- the Turing test has now been passed: Chat-GPT convinced 73% of participants that it was human.

What's interesting is that nobody is buying it. Experts are not only denying that Chat-GPT is conscious but seemingly not even taking the idea seriously. I have to admit, I'm with them. It just doesn't seem plausible.

The key question is: what would a machine actually have to do to convince us? Experts have tended to focus on the technical side of this question -- that is, on discerning what technical features a machine or program would need in order to satisfy our best theories of consciousness. A 2023 article, for instance, as reported in The Conversation, compiled a list of 14 technical criteria or "consciousness indicators," such as learning from feedback (Chat-GPT didn't make the grade).

But creating a strong AI is as much a psychological challenge as a technical one. It is one thing to produce a machine that satisfies the various technical criteria that we set out in our theories, but it is quite another to suppose that, when we are finally confronted with such a thing, we will believe it is conscious.

The success of Chat-GPT has already demonstrated this problem. For many, the Turing test was the benchmark of machine intelligence. But if it has been passed, as the pre-print study suggests, the goalposts have shifted. They might well keep shifting as technology improves.

Myna difficulties

This is where we get into the murky realm of an age-old philosophical quandary: the problem of other minds. Ultimately, one can never know for sure whether anything other than oneself is conscious. In the case of human beings, the problem is little more than idle skepticism. None of us can seriously entertain the possibility that other humans are unthinking automata. But in the case of machines, it seems to go the other way: it's hard to accept that they could be anything but.

A particular problem with AIs like Chat-GPT is that they seem like mere mimicry machines. They're like the myna bird who learns to vocalize words with no idea of what it is doing or what the words mean.
This doesn't mean we will never make a conscious machine, of course, but it does suggest that we might find it difficult to accept it if we did. And that might be the ultimate irony: succeeding in our quest to create a conscious machine, yet refusing to believe we had done so. Who knows, it might have already happened.

So what would a machine need to do to convince us? One tentative suggestion is that it might need to exhibit the kind of autonomy we observe in many living organisms. Current AIs like Chat-GPT are purely responsive. Keep your fingers off the keyboard and they're as quiet as the grave. Animals are not like this, at least not the ones we commonly take to be conscious, like chimps, dolphins, cats and dogs. They have their own impulses and inclinations (or at least appear to), along with the desire to pursue them. They initiate their own actions on their own terms, for their own reasons (a contrast sketched in code after this article).

Perhaps if we could create a machine that displayed this type of autonomy -- the kind of autonomy that would take it beyond a mere mimicry machine -- we really would accept it was conscious. It's hard to know for sure. Maybe we should ask Chat-GPT.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
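To make that autonomy contrast concrete, here is a purely hypothetical Python sketch of a responsive agent next to one with an internal drive. The "curiosity" drive and its dynamics are invented for illustration and come from neither article; they stand in for whatever mechanism might let a machine initiate action unprompted.

```python
import random

def reactive_agent(prompt: str) -> str:
    """Purely responsive: acts only when prompted, like today's chatbots."""
    return f"response to: {prompt!r}"

class AutonomousAgent:
    """Carries a hypothetical internal drive ("curiosity") that accumulates
    over time and eventually triggers action with no external prompt."""

    def __init__(self) -> None:
        self.curiosity = 0.0  # an invented stand-in for an internal impulse

    def tick(self):
        # The drive grows whether or not anyone is interacting with the agent.
        self.curiosity += random.uniform(0.0, 0.3)
        if self.curiosity > 1.0:
            self.curiosity = 0.0
            # Self-initiated action, on the agent's own schedule.
            return "explores its environment, unprompted"
        return None  # no urge to act yet

if __name__ == "__main__":
    print(reactive_agent("hello"))  # speaks only when spoken to
    agent = AutonomousAgent()
    for _ in range(12):
        action = agent.tick()       # acts when its own drive demands it
        if action:
            print(action)
```

Whether behavior like this would move anyone past the mimicry worry is, of course, exactly the open question the article raises.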
Researchers explore new approaches to create conscious AI, while experts debate the challenges of recognizing and accepting machine consciousness. The journey involves both technical and psychological hurdles.

In the rapidly evolving field of artificial intelligence, researchers and philosophers are embarking on a new frontier: creating conscious AI. While language models like ChatGPT have demonstrated impressive capabilities, the quest for machine sentience remains elusive and controversial [1].

Founded in 2024, Conscium is at the forefront of this ambitious endeavor. Led by British AI researcher Daniel Hulme and advised by prominent neuroscientists and philosophers, the company aims to break conscious thought into its most basic form and recreate it in the lab [1].

Mark Solms, a South African psychoanalyst and neuropsychologist involved in the Conscium project, proposes a novel theory of consciousness. He suggests that the brain uses perception and action in a feedback loop mediated by emotions, which ultimately gives rise to sentience and consciousness [1].

While recent studies suggest that AI models like ChatGPT have passed the Turing test, convincing 73% of participants that they were human, experts remain skeptical about machine consciousness [2].

This skepticism highlights a crucial challenge in the field: the psychological barrier to accepting machine consciousness. As AI capabilities improve, the criteria for recognizing consciousness in machines seem to shift, making it increasingly difficult to convince humans that an AI system is truly conscious [2].

Researchers have developed various technical criteria to assess machine consciousness. A 2023 article proposed 14 "consciousness indicators," including the ability to learn from feedback. However, even if an AI system meets these criteria, convincing humans of its consciousness remains a significant hurdle [2].

The debate surrounding machine consciousness touches on the age-old philosophical problem of other minds. While we readily accept consciousness in other humans, extending this acceptance to machines proves challenging [2].
One suggestion for overcoming the psychological barrier is to create AI systems that exhibit autonomy similar to that observed in living organisms. Current AI models like ChatGPT are purely responsive, whereas animals commonly considered conscious display their own impulses and inclinations [2].

As researchers continue to explore new algorithms and approaches to machine consciousness, the field faces both technical and psychological challenges. The quest for conscious AI not only pushes the boundaries of technology but also forces us to confront fundamental questions about the nature of consciousness itself [1][2].

Summarized by Navi