Curated by THEOUTPOST
On Sun, 9 Feb, 4:00 PM UTC
2 Sources
[1]
'I encountered the terror of never finding anything': The hollowness of AI art proves machines can never emulate genuine human intelligence
Looking at AI-generated art shows that machines may never truly understand the human mind, because there are states of mind that can never be automated. The concepts of "sentience" and "agency" in machines are muddled, particularly because these qualities are so difficult to measure. But many speculate that the improvements we are seeing in artificial intelligence (AI) may one day amount to a new form of intelligence that supersedes our own. Regardless, AI has been a part of our lives for many years -- and we encounter its invisible hand predominantly on the digital platforms most of us inhabit daily.

Digital technologies once held immense promise for transforming society, but this utopianism feels like it's slipping away, argues technologist and author Mike Pepi in his new book "Against Platforms: Surviving Digital Utopia" (Melville House Publishing, 2025). We have been taught that digital tools are neutral, but in reality they are laden with dangerous assumptions and can lead to unintended consequences. In this excerpt, Pepi assesses, through the prism of art, whether AI -- the technology at the heart of so many of these platforms -- can ever emulate the human feelings that move us.

The Museum of Modern Art's atrium was packed to the brim the day I visited Refik Anadol's much-anticipated installation of Unsupervised (2022). As I entered, the crowd was fixated on a massive projection of one of the artist's digital "hallucinations." MoMA's curators tell us that Anadol's animations use artificial intelligence "to interpret and transform" the museum's collection. As the machine learning algorithm traverses billions of data points, it "reimagines the history of modern art and dreams about what might have been." I saw animated bursts of red lines and intersecting orange radials. Soon, globular facial forms appeared. The next moment, the trunk of a tree settled in the corner. A droning, futuristic soundtrack filled the room from invisible speakers.
The crowd reacted with hushed awe as the mutating projections approached familiar forms. Anadol's work debuted at a moment of great hype about artificial intelligence's (AI's) ability to be creative. The audience was not only there to see the fantastic animations on the screen; many came to witness a triumph of machine creativity in the symbolic heart of modern art. Every visitor to Unsupervised encountered a unique mutation. Objects eluded the mind's grasp. Referents slipped out of view. The moments of beauty were accidental, random flashes of computation, never to return. Anadol calls it a "self-regenerating element of surprise"; one critic called it a screensaver. As I gazed into the mutations, I admit I found moments of beauty. It could have registered as relaxation, even bliss. For some, fear, even terror. The longer I stuck around, the more emptiness I encountered. How could I make any statement about the art before me when the algorithm was programmed to equivocate? Was it possible for a human to appreciate, let alone grasp, the end result?

In need of a break, I headed upstairs to see Andrew Wyeth's Christina's World (1948), part of the museum's permanent collection. Christina's World is a realist depiction of an American farm. In the center of the frame, a woman lies in a field, gesturing longingly toward a distant barn. The field makes a dramatic sweeping motion, etched in ochre grass. The woman wears a pink dress and contorts at a slight angle. The sky is gray, but calm. Most viewers are confronted by questions: Who is this woman, and why does she lie in this field? Christina was Andrew Wyeth's neighbor. At a young age, she developed a muscular disability and was unable to walk. She preferred to crawl around her parents' property, which Wyeth witnessed from his home nearby. Still, there are more questions about Christina.
What is Wyeth trying to say in the distance between his subjects? What is Christina thinking in the moment that Wyeth captures? This tiny epistemological game plays out each time one views Christina's World. We consider the artist's intent. We try to match our interpretation with the historical tradition from which the work emerged. With more information, we can peer still further into the work and wrestle with its contradictions. This is possible because there is a single referent. This doesn't mean its meaning is fixed, or that we prefer its realism. It means that the thinking we do with this work meets an equal, human, creative act.

The experience of Unsupervised is wholly different. The work is combinatorial, which is to say, it tries to make something new from previous data about art. The relationships drawn are mathematical, and the moments of recognition are accidental. Anadol calls his method a "thinking brush." While he is careful to explain that the AI is not sentient, the appeal of the work relies on the machine's encroachments on the brain. Anadol says we "see through the mind of a machine." But there is no mind at work at all. It's pure math, pure randomness. There is motion, but it's stale. The novelty is fleeting.

In the atrium, Unsupervised presents thousands of images, but I can ask nothing of them. Up a short flight of steps, I am presented with a single image and can ask dozens of questions. The institution of art is the promise that some, indeed many, of those questions will be answered. They may not be answered with certainty, but very few things are. Nonetheless, the audience still communes with the narrative power of Christina's World. With Unsupervised, the only thing reflected back was a kind of blank, algorithmic stare. I could not help but think that Christina's yearning gaze, never quite revealed, might not be unlike the gaping stare of the audience in the atrium below.
As I peered into the artificially intelligent animations searching for anything to see, I encountered the terror of never finding anything -- a kind of paralysis of vision -- not the inability to perceive but the inability to think alongside what I saw.

All artificial intelligence is based on mathematical models that computer scientists call machine learning. In most cases, we feed the program training data and ask various kinds of networks to detect patterns. In recent years, machine learning programs have been able to perform ever more complex tasks thanks to increases in computing power, advances in software, and, most of all, an exponential explosion of training data. But for half a century, even the best AI was capped in what it could do, able only to automate predefined, supervised analysis. For example, given a set of information about users' movie preferences and some data about a new user, it could predict what movies this user might like. This presents itself to us as "artificial intelligence" because it replaces and, functionally, far surpasses the act of asking a friend (or better yet, a book) for a movie recommendation. Commercially, it flourished. But could these same software and hardware tools create a movie itself? For many years, the answer was "absolutely not." AI could predict and model, but it could not create. A machine learning system is supervised because each input has a correct output, and the algorithm repeatedly corrects and retrains the model until it can predict accurately. But what happens when we don't tell the model what is correct? What if we gave it a few billion examples of cat images for training, and then told it to make a completely new image of a cat? In the past decade, this became possible with generative AI, a type of deep learning that uses generative adversarial networks to create new content.
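The kind of supervised prediction described here -- inferring a new user's taste from other users' ratings -- can be sketched in a few lines of Python. Everything in this example (the function, the users, the movies) is invented for illustration; real recommender systems use far richer models than this nearest-neighbor toy.

```python
def predict_favorite(ratings, new_user):
    """Recommend the unseen movie rated highest by the most similar user."""
    def similarity(a, b):
        shared = set(a) & set(b)
        # Higher score = more agreement on the movies both users rated.
        return -sum(abs(a[m] - b[m]) for m in shared) if shared else float("-inf")

    # Find the training user whose tastes best match the new user's.
    best_match = max(ratings, key=lambda u: similarity(ratings[u], new_user))
    # Recommend that user's top-rated movie the new user hasn't seen yet.
    unseen = {m: r for m, r in ratings[best_match].items() if m not in new_user}
    return max(unseen, key=unseen.get)

# Toy "training data": each known user's movie ratings on a 1-5 scale.
ratings = {
    "alice": {"Solaris": 5, "Stalker": 5, "Top Gun": 1},
    "bob":   {"Top Gun": 5, "Rocky": 4, "Solaris": 1},
}
print(predict_favorite(ratings, {"Solaris": 4, "Top Gun": 1}))  # "Stalker"
```

The key point is the one the excerpt makes: every training example pairs an input (a user's known ratings) with a correct output, so the system can only interpolate among answers it has already seen.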
Two neural networks work in tandem: a generator, which produces new data, and a discriminator, which evaluates it. The two compete, the generator updating its outputs based on feedback from the discriminator. Eventually, this process creates content that is nearly indistinguishable from the training data. With the introduction of tools like ChatGPT, Midjourney, and DALL-E 2, generative AI boosters claim we have crossed into a Cambrian explosion broadly expanding the limits of machine intelligence. Unlike previous AI applications that simply analyzed existing data, generative AI can create novel content, including language, music, and images. The promise of Unsupervised is a microcosm of generative AI: fed with enough information, nonhuman intelligence can think on its own and create something new, even beautiful. Yet the distance between Christina's World and Unsupervised is just one measure of the difference between computation and thought.

AI researchers frequently refer to the brain as "processing information." This is a flawed metaphor for how we think. As material technology advanced, we looked for new metaphors to explain the brain. The ancients used clay, viewing the mind as a blank slate upon which symbols were etched; the nineteenth century used steam engines; later, brains were electric machines. Only a few years after computer scientists started processing data on mainframe computers, psychologists and engineers began to speak of the brain as an information processor. The problem is that your brain is not a computer, and computers are not brains. Computers process data and calculate results. They can solve equations, but they do not reason on their own. Computers can only blindly mimic the work of the brain -- they will never have consciousness, sentience, or agency. Our minds, likewise, do not process information.
Thus, there are states of mind that cannot be automated, and intelligences that machines cannot have.
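The generator-versus-discriminator loop described earlier can be caricatured without any neural network at all. In this deliberately crude sketch (my own illustration, not how real GANs are implemented), the "generator" is a single number and the "discriminator" merely learns where real samples tend to fall:

```python
import random

random.seed(0)

def real_sample():
    # The "training data": numbers drawn from around 10.
    return random.gauss(10.0, 1.0)

gen = 0.0          # the generator's current output
disc_center = 0.0  # the discriminator's belief about where real data lives

for _ in range(2000):
    # Discriminator step: nudge its estimate toward a fresh real sample,
    # improving its ability to tell real from generated.
    disc_center += 0.05 * (real_sample() - disc_center)
    # Generator step: use the discriminator's feedback to look more "real".
    gen += 0.05 * (disc_center - gen)

# After training, the generator's output sits near the real data's mean (~10).
print(round(gen, 1))
```

In an actual GAN both parties are deep networks trained by gradient descent, and the discriminator outputs a real-versus-fake probability rather than a point estimate; the only thing this sketch preserves is the alternation, with each side improving in response to the other.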
[2]
Man vs machine: Can AI ever think consciously like us?
On January 27, Chinese researchers released the DeepSeek-R1 large language model (LLM), which led to Nvidia losing nearly $600 billion in a single day. The breakthrough highlighted AI's potential, reminiscent of AlphaGo's "Move 37" moment in 2016, when the machine defeated a Go champion. This article explores humanity's fear of AI surpassing human intelligence and the differences between machine learning and human consciousness.

You're wondering who I am (secret, secret, I've got a secret)
Machine or mannequin? (Secret, secret, I've got a secret)
With parts made in Japan (secret, secret, I've got a secret)
I am the modern man
I've got a secret, I've been hiding under my skin
My heart is human, my blood is boiling, my brain IBM
-- "Mr Roboto" by Styx

On January 27, Chinese researchers released the DeepSeek-R1 large language model and created a roadkill moment for the US technology sector. Nvidia, the world's largest chipmaker, lost almost $600 billion in one day's trading. Something else also happened that day: DeepSeek researchers published a paper on their model. It wasn't just the financial blow that unsettled researchers and netizens, but a revelation buried deep in that paper. Andrej Karpathy, founder of Eureka Labs, with former stints at Tesla and OpenAI, spotted something eerily familiar. In a post on X on January 29, he wrote, "'Move 37' is the word-of-day -- it's when an AI, trained via the trial-and-error process of reinforcement learning, discovers actions that are new, surprising and secretly brilliant even to expert humans. It is a magical, just slightly unnerving, emergent phenomenon.... with the latest crop of 'thinking' LLM models (e.g. OpenAI-o1, DeepSeek-R1, Gemini 2.0 Flash Thinking), we are seeing the first very early glimmers of things like it in open world domains.... I don't think we've seen equivalents of Move 37 yet.... But the technology feels on track to find them."
"Move 37" refers to a move played by AlphaGo, a machine, against Lee Sedol, a champion player of Go (a game akin to chess), in 2016. Experts estimated that there was a 1 in 10,000 chance of a human playing such a move; yet the machine played it, and eventually defeated Sedol. Such "Move 37" moments are what fill humanity with dread. Writers and filmmakers have always wondered about a post-human Earth where humans would have done unto them what they did to other species. We, Homo sapiens, intermingled and became the sole surviving hominid; the others -- Neanderthals, Denisovans, Homo erectus and Homo floresiensis -- all went extinct. Having become the apex predator, our fear of death and, of course, of extinction by a "higher consciousness" (machines) has always been reflected in our literature and films. The consequences of the creation surpassing its creator in mental prowess are a recurring theme. The original, of course, is Prometheus, a figure in Greek mythology who stole fire (knowledge) from the gods and gave it to humans. The act of overreaching, of defying the code, was condemned, and Zeus punished Prometheus by having his liver eaten daily by an eagle; the liver regenerated every day for the eagle to continue its feast. As AI breakthroughs are announced, everyone, including Elon Musk, is concerned about their impact on humanity. The human-dominated era is approaching its end, it would appear, and that is the cause of all the concern.

SHORT HISTORY OF DOMINANCE

Humans didn't get to the top of the food chain by playing nice. We can blame evolution for that. Early hominins, such as the Australopiths, had a brain about the size of a chimpanzee's (~384 g), while our brains weigh about 1,300 g. It wasn't just the size of the brain; there was a structural change, too.
Apart from the amygdala (the "crocodile brain"), which is responsible for fear and aggression, we developed the hippocampus for memory storage and, most significantly, the cerebral cortex. It is responsible for attention, thought, perception, episodic memory, reasoning, decision-making, comprehension, articulation and linguistic fluency. This million-year shaping of the brain also created something that is more than intelligence: consciousness.

Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex, is a leading thinker on consciousness. He says in his book Being You: "Somehow, within each of our brains, the combined activity of billions of neurons, each one a tiny biological machine, is giving rise to a conscious experience. And not just any conscious experience, your conscious experience, right here, right now. How does this happen? Why do we experience life in first person?"

Consciousness is what makes each one of us experience the world from a unique point of view. As of now, machines don't have this faculty. Machines reason, as we do, but are they conscious? This individuality matters. Throughout human history, social, scientific and theological advances were made because an individual challenged the existing world view. It was Werner Heisenberg (yes, the uncertainty-principle guy) who, during his months of solitude on the island of Helgoland, came up with an insight: there is no quantum reality beyond what is revealed when we observe it. Until an observation is made, that thing, that particle, doesn't exist. It is as if, when you turn your back, the table behind you disappears, and reappears when you turn again. This insight shook even Albert Einstein, who tried but could not disprove it. Let's take another example of human insight: Pablo Picasso's Les Demoiselles d'Avignon.
This landmark painting ushered in Cubism and set the course of modern art. Picasso synthesised his understanding of multiple disciplines -- African art, advances in geometry and spacetime, cinematography, X-ray technology and photography -- to create the "geometric language of emergent cubism", to quote a book by Arthur I Miller that explores the lives of Einstein and Picasso. Synaptic leaps and synthesis are our essence.

Current machine learning models use the brute force of computation to memorise data, and probabilistic techniques to produce answers. This approach works well for summarisation and for producing creative works -- poems, paintings, et al -- that reflect shades of original works but lack their originality, their capacity for surprise and their reflective depth. Take, for instance, one of the greatest opening lines in English literature, from George Orwell's 1984: "It was a bright cold day in April, and the clocks were striking thirteen." Nothing of this sort has yet been produced by a machine. It requires observation of life, realisation of its absurdity, absorption of the emotion it generates, and its articulation. That needs consciousness, whatever that is.

"I do not agree with your assumption that DeepSeek is conscious. It can be intelligent, and even exhibit impressive moments of insight like AlphaGo did, without being sentient or conscious. Consciousness is not the same as intelligence. I can't say if it is even possible for a silicon-based system to have the capacity for consciousness. Consciousness could be tied to the biological substrate, for all we know," says Professor Susan Schneider, director, Center for the Future Mind, Florida Atlantic University.

WILL MACHINES HAVE DREAMS?

While consciousness might be a very debatable topic, machines' specific intelligence capabilities cannot be doubted. Viswanathan Anand, five-time world chess champion, says, "We, chess players, are in a sense from the future.
Computers became better than human players years ago. The Turing Test was passed long ago in chess. In fact, the way to distinguish between computers and humans in chess is by looking at the errors that humans make. But you have to remember that chess is a game with a very specific set of rules."

One of the key elements of progress in AI models will be the way they treat memory. We humans don't remember everything, but all those neurons talking to each other somehow fish out a specific piece of remembrance when the occasion demands. AI models still have some way to go in that area. "Memory retention in AI involves more than simply storing and retrieving static data -- it is about dynamically managing and recalling information in a way that enhances decision-making and contextual understanding. An ideal AI memory system should create associations, recognise patterns and retrieve information in a way that aligns with the current context. However, standard RAG/AI methods work more like databases that only store exact and segmented pieces of data. That is a major gap our method aims to bridge," Yu Su, distinguished assistant professor in the department of computer science and engineering and a former senior researcher at Microsoft, told ET in an earlier interview.

That brings us back to humans and their dreams. It is said that what you can dream, you can become. So, let's return to Philip K Dick's question: "Do androids dream of electric sheep?" With almost limitless memory and computational power, machines may dream -- not of Earth but of the wider galaxy that is waiting for that interstellar dreamer.
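The gap Yu Su describes -- database-style exact lookup versus context-driven association -- can be illustrated with a toy comparison. This is my own sketch under invented data, not his group's method: the associative version simply scores stored items by word overlap with the current context, where a database-style lookup demands a verbatim match.

```python
# A small store of remembered facts, invented for illustration.
memory = [
    "Lee Sedol lost to AlphaGo in 2016",
    "DeepSeek-R1 was released in January 2025",
    "GANs pit a generator against a discriminator",
]

def exact_lookup(key):
    # Database-style retrieval: succeeds only on a verbatim match.
    return key if key in memory else None

def associative_recall(context):
    # Associative retrieval: return the stored item sharing the most
    # words with the current context, even without an exact match.
    ctx = set(context.lower().split())
    return max(memory, key=lambda item: len(ctx & set(item.lower().split())))

print(exact_lookup("AlphaGo match"))                    # None: no verbatim entry
print(associative_recall("the AlphaGo match in 2016"))  # the Lee Sedol fact
```

Real context-aware memory systems use learned embeddings rather than word overlap, but the contrast is the same: association retrieves what is relevant, not merely what was stored under an identical key.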
An exploration of the differences between AI-generated art and human creativity, highlighting the unique aspects of human consciousness and the challenges AI faces in replicating genuine intelligence.
In recent years, artificial intelligence has made significant strides in various fields, including art creation. A prime example is Refik Anadol's installation "Unsupervised" (2022) at the Museum of Modern Art, which uses AI to "interpret and transform" the museum's collection 1. This installation, featuring ever-changing digital "hallucinations," has sparked discussions about AI's creative capabilities and its potential to surpass human intelligence.
Despite the initial awe inspired by AI art, critics and observers have noted a fundamental emptiness in these creations. Mike Pepi, in his book "Against Platforms: Surviving Digital Utopia," argues that while digital technologies once held immense promise, this utopianism is slipping away 1. The AI-generated art, while visually striking, lacks the depth and intentionality found in human-created works.
The contrast between AI-generated art and human-created art becomes apparent when comparing Anadol's "Unsupervised" with Andrew Wyeth's "Christina's World" (1948). While "Unsupervised" presents an ever-changing array of images, "Christina's World" offers a single, thought-provoking scene that invites deeper contemplation and interpretation 1. This comparison highlights the unique aspects of human creativity, including intent, emotion, and the ability to convey complex narratives.
The debate surrounding AI's capabilities extends beyond art into the realm of consciousness and genuine intelligence. Researchers and philosophers continue to grapple with the concepts of "sentience" and "agency" in machines 2. Anil Seth, a professor of cognitive and computational neuroscience, emphasizes that consciousness is what gives each individual a unique perspective on the world – a quality that machines currently lack 2.
Recent advancements in AI, such as the release of the DeepSeek-R1 large language model, have reignited concerns about AI surpassing human intelligence 2. These developments, reminiscent of AlphaGo's famous "Move 37" against champion Go player Lee Sedol in 2016, demonstrate AI's potential to make decisions that surprise even expert humans 2.
To understand the uniqueness of human intelligence, it's crucial to consider our evolutionary history. The development of the human brain, particularly the cerebral cortex, has enabled complex cognitive functions such as reasoning, decision-making, and linguistic fluency 2. This evolutionary process has resulted in not just intelligence, but also consciousness – a quality that remains elusive in artificial systems.
As AI continues to advance, questions about its impact on human society and creativity persist. While AI can process vast amounts of data and generate novel combinations, it still lacks the depth of understanding and emotional resonance that characterize human art and thought. The challenge for the future lies in harnessing AI's capabilities while preserving and valuing the unique aspects of human consciousness and creativity.
© 2025 TheOutpost.AI All rights reserved