Curated by THEOUTPOST
On Thu, 3 Oct, 12:03 AM UTC
2 Sources
[1]
London Standard's AI-generated review 'by' late art critic Brian Sewell exposes a significant philosophical threat
University of East Anglia provides funding as a member of The Conversation UK.

For the first issue in its new weekly print edition, the London Standard has run an experiment: an AI-generated review of the National Gallery's Van Gogh: Poets and Lovers exhibition, written in the style of the late art critic Brian Sewell.

Experiments of this nature typically aim to link real-world observations to general theories. For experiments to successfully confirm or reject big ideas, they need a clear design and purpose. Little has been shared about the design of the Standard's experiment, and details about the training data and algorithms used are unclear.

But what about the purpose? Is this a tentative exploration of the role of the art critic? Does it aim to kindle a broader social dialogue about which human jobs are replaceable and which are not? Is it an ethical experiment on how technology might help us deal with the loss of cherished human lives? Or just another Turing test to judge how far from human intelligence AI still is?

If the aim can't be determined, perhaps the review can be seen as "experimental" in a different sense - as a tentative exploration of a more philosophical question about the ways in which humans can be reduced to machines.

This article is part of our State of the Arts series. These articles tackle the challenges of the arts and heritage industry - and celebrate the wins, too.

One possibility is that this new technology might lead to what the Israeli academic and author Yuval Noah Harari has called "de-individuation". Because so much about ourselves as human beings - what we think, what we believe, what we love or whom we love - can be reduced to AI data points, we are in a sense broken apart, or fragmented, by it. In this sense, training an AI system on Sewell's collected writing splinters the person he once was.

Critics have scoffed at the AI-written review, deeming it a pale copy that fails to capture "the waspishness and hauteur of Sewell's writing". But this view obscures the greater philosophical threat this technology poses - the reduction of the human to the machine.

What philosophers say about this threat

The philosopher Hannah Arendt offered a chilling argument against such reductionism in her 1958 book The Human Condition. She warned of a world where powerful computing machines seem to approach independent thinking and consciousness. However, whether this could count as thought, she argued, depends on whether we are willing to reduce our own thinking to mere calculation and computation.

Arendt believed that we can and should resist such a reduction, as humans have diverse ways of engaging with the world. In The Human Condition, she distinguishes between what she calls "labour", "work" and "action". If labour is natural and work is artificial, action, for Arendt, is closer to the sphere of unconstrained human creativity. "Action" is what people do when they use language to tell the stories of their lives. It is a form of communication: by means of language we are able to articulate the meaning of our actions and to coordinate them with those of others different from us.

But Arendt worried that this kind of creative human interchange, through language and storytelling, might be reduced to mechanical construction - to something artificial.
She also insisted that, while the telling of a story requires the individual to take a stance and act in the world, its persistence depends on there being other people to hear it and retell it, perhaps in different forms. It depends, to a certain extent, on trust. But this trust bond is threatened if human action is reduced to what can be produced by a machine.

Another philosopher, closer to our times, who worried about the erosion of trust brought about by the wide, unreflective development and adoption of AI was Daniel Dennett, who died earlier this year. In his most alarmist moments, Dennett argued that the most pressing problem is not that AI systems will take jobs or change warfare, but that they will destroy human trust. Even if large language models (AI systems capable of understanding and generating human language by processing vast amounts of text data) never think as humans think, even if they can never tell their own stories, there is still the very real possibility, according to Dennett, that they will move us into a world where we won't be able to tell truth from falsehood - where we won't know whom to trust.

And that is a scary thought experiment that the Standard might (unintentionally) have brought to our attention.
[2]
London Standard's AI-generated review 'by' late art critic Brian Sewell exposes a significant philosophical threat
The London Standard's publication of an AI-generated art review mimicking the late critic Brian Sewell sparks debate on AI ethics and the future of journalism.
The London Standard, the recently relaunched London newspaper formerly known as the Evening Standard, stirred controversy by publishing an art review generated by artificial intelligence (AI) in the style of the late critic Brian Sewell, raising significant ethical questions about the use of AI in journalism and the impersonation of deceased individuals [1].
The AI-generated review critiqued the National Gallery's Van Gogh: Poets and Lovers exhibition. The piece was crafted to mimic Sewell's distinctive writing style, complete with his characteristic acerbic tone and cultural references. This attempt at replicating a deceased critic's voice has ignited a debate about the boundaries of AI use in media and the ethical implications of such practices [2].
The publication of this AI-generated review has exposed several ethical concerns:
Impersonation of the deceased: The use of AI to mimic a deceased person's writing style raises questions about respect for the dead and the potential misuse of their legacy [1].
Authenticity in journalism: The incident challenges the notion of authenticity in media, potentially eroding public trust in journalistic integrity [2].
Philosophical implications: The ability of AI to replicate human writing styles so convincingly poses deeper questions about the nature of creativity, authorship, and the value of human-generated content [1].
The publication has drawn criticism from various quarters, including art critics, ethicists, and AI experts. Many have expressed concern about the potential for AI to be used to spread misinformation or to manipulate public opinion by impersonating trusted voices [2].
This incident has sparked a broader discussion about the future of journalism in an age of advanced AI. Questions arise about the role of human writers, the value of original thought, and the potential for AI to both enhance and undermine journalistic practices [1].
The controversy highlights the urgent need for clear ethical guidelines and transparency in the use of AI in media. As AI technology continues to advance, it becomes increasingly important for news organizations to establish protocols for the responsible use of AI-generated content, ensuring that readers are fully informed about the origin and nature of the material they are consuming [2].