4 Sources
[1]
Inside Hollywood's AI Power Struggle: Where Does Human Creativity Go From Here?
Film festival directors Peter Isaac Alexander and Marisa Cohen said that when they first saw the film, they thought it was interesting, creative and unlike anything their review committee had seen before. The flick, a historical movie about an artist from an Italian director, met all the necessary criteria to be screened in front of a live audience and so it was, during last year's Portland Festival of Cinema, Animation and Technology. But after the film was over, some members of the audience started loudly booing. The reason? A disclosure in the credits that read the film was "a blend of artificial intelligence and human creativity." Out of the 180 or so films screened at last year's festival, only a few had generative AI elements -- many submissions didn't make the cut because it was clear that AI had been used to create the whole movie, which the festival doesn't allow. Despite Alexander and Cohen's personal reservations and serious concerns around generative AI, they know AI has become a popular tool for moviemakers. "It's hard to know what to do as a film festival director, because we want to be fair. We want to show interesting art. We want people to see what tools are available that they could use," Cohen said in an interview with CNET. "Some filmmakers don't have enough money to buy fancy software [or] have a team of animators, and if they want to tell their story, should they use AI?" This incident highlights how increasingly common generative AI is becoming in the creation of movies, despite AI provoking widespread fears and frustration about future job security, potential theft and the diminishment of human creativity and its intrinsic value. It has been two and a half years since ChatGPT exploded in popularity and set off a new race among tech companies to develop the most advanced generative AI. Like nearly every online service, creative software programs got major AI makeovers, including everything from Photoshop to video editors. AI image generators took off, needing only a simple text description called a prompt to create artistic visions ranging from worthy efforts to unmitigated slop. Despite the near ubiquity of AI in artistic computer programs, there is an intense power struggle raging behind the scenes. While some people brag about AI optimizing creation, others decry the tech as the end of human creativity. Nowhere is this struggle more evident than in the entertainment industry. The story of AI in Hollywood is less of the traditional "good versus evil" comic book story and more of a complicated, truly tangled mess. Some studios and networks are all-in on AI. Others have serious legal concerns. Unions -- which protect hundreds of thousands of entertainment workers -- have tried to guide the implementation of AI on sets, with tales of success varying depending on who you ask. Creators of all kinds, from writers to actors to visual effects artists, have been ringing alarm bells over the development and deployment of AI since the tech started rapidly expanding a few years ago. The entertainment business has always been an ultracompetitive industry. But the industry in 2025 is a different beast, thanks to rising costs that are sending productions overseas and creating a job market that's "in crisis." 
AI is touted both as the solution to these woes and the very thing that threatens to make these problems permanent. Every decision that entertainment leaders make today sets the foundation for how AI will affect the next generation of films and the people behind them. Studios, streamers and organizations like the Motion Picture Academy, Television Academy and labor unions are all exploring their options. For the rest of us, the power, money and influence of Hollywood means that those decisions about AI will undoubtedly have seismic consequences for every creative industry and creator going forward. It will also set a standard for what's a normal and acceptable amount of gen AI in movies and TV shows, which affects all of us as viewers. This is what you need to know to untangle the web of the biggest factors influencing Hollywood's experience and attitudes toward AI. Computer-generated imagery isn't new. What makes generative AI different is that anyone can use it to make a lot of content very quickly. Old barriers, whether it be money, education or practical skill, are eroding as AI makes it easier and cheaper than ever to create digital content. The latest wave in this evolution is AI video generators, which create video clips using text-to-video and image-to-video technology. Most major tech companies and a number of AI startups have announced or released some version of an AI video model. OpenAI, the company behind ChatGPT, released Sora at the end of 2024, followed by Adobe's Firefly and Google's Veo models. Each model has its own quirks, but in general, they all produce AI video clips between 5 and 10 seconds long. The next step for these companies will be focusing on creating longer and higher-resolution videos. Both of those upgrades will prove critical in determining whether AI video generators can be useful enough for professionals. Even pushing AI videos up to 30 seconds long, Alexander told me, would help "cover pretty much most of what you see in modern filmmaking," in terms of scene length. Only one video generator, Google's Veo 3, is able to produce audio, but even that addition is new and often clunky. None of the others can create audio natively in these clips, which is another thing making AI video models less useful for professionals. Not all generative AI tools are for wholesale creation. AI has also accelerated the evolution of video editing software. Adobe's Premiere Pro, considered one of the main professional video editing programs, got its first AI-powered tool, called generative extend, in April. Traditional editing software that can remove objects and de-age actors can also now incorporate some level of generative AI. This generative editing further blurs the line between what content is human-generated, traditionally retouched and AI-generated. As AI development races along, the tools get better -- fewer incidents of 12-fingered people or weird hallucinations. Today's limitations could be removed in the near future, making it more likely for AI to infiltrate editing and post-production processes. Despite technical limitations, many entertainment leaders are investigating how they can take advantage of the new AI tech. There are multiple motivations behind the entertainment industry's interest in AI. The most obvious is that studios and networks are hoping it will save them money. 
Renowned director James Cameron (of Titanic and Avatar fame) said on Meta CTO Andrew Bosworth's podcast in April that to continue producing VFX-heavy films, "We got to figure out how to cut the cost of that in half." He quickly added that he's not talking about laying off half the people who work on those projects, but instead using generative AI to speed up the process for those workers. An expert in creating CGI and VFX-heavy movies, Cameron joined the board of directors at Stability AI, an AI creative software company, in September 2024. Speeding along production is surely a concern on big-budget projects like those Cameron leads, both for the crews working on them and for the viewers who are too used to waiting years for the next season of Stranger Things or Bridgerton. But for smaller productions -- especially for amateurs -- AI is already being used for efficiency and cost savings. Netflix's co-CEO Ted Sarandos said on an earnings call after Cameron's podcast appearance that he hopes AI can "make movies 10% better," not just cheaper. And that's certainly what some pro-AI celebrities are hoping for. Natasha Lyonne just announced that her sci-fi directorial debut will be made in partnership with Asteria, an AI production studio she co-founded that uses so-called 'clean' AI models. Horror studio Blumhouse participated in a pilot program for Meta's AI video project Movie Gen. Ben Affleck has been vocal in the past about embracing AI in future movie-making to reduce the "more laborious, less creative and more costly aspects of filmmaking." One of the most notable recent cases of AI being used in moviemaking came up this past awards season. Adrien Brody won an Oscar for his work in The Brutalist, but the film came under fire when the movie's editor, Dávid Jancsó, revealed that gen AI voice tech was used to improve Brody's and his co-star Felicity Jones's Hungarian dialogue. Brody isn't a native Hungarian speaker, so an AI program called Respeecher was used to refine specific pronunciations. But it was also about saving time and money, according to Jancsó. The backlash was instant and intense. The Academy of Motion Picture Arts and Sciences, the organization behind the storied award show, later came out and clarified that AI usage would "neither help nor harm" a movie's chances of winning. The organizations behind the Emmys, the TV show-focused award show, said AI-edited submissions will be judged on a case-by-case basis. And we'll certainly see more AI usage in at least a few future blockbusters, thanks to the biggest current collaboration between AI companies and studios. An AI video company called Runway and Lionsgate, the studio behind blockbuster films like the John Wick series and TV shows like Mad Men, have teamed up. The deal gives Runway access to Lionsgate's catalog -- all its movies and TV shows -- to create custom, proprietary AI models that can be used however the studio sees fit. Lionsgate filmmakers are reportedly already using the new AI, according to the company's motion picture chair, Adam Fogelson, in a 2024 earnings call. It's a one-of-a-kind deal, Rob Rosenberg, former general counsel at Showtime Networks and an IP lawyer, said in an interview with CNET. "I guarantee you, everybody's kicking the tires [on AI]. Everybody is trying to understand it and figure out, are there benefits to this, in addition to the potential harms," said Rosenberg. "But I do find it very telling that you haven't seen a lot of stories about other studios climbing aboard the way that Lionsgate has." 
While AI enthusiasts or AI-curious folks are dipping into AI -- or diving in, in Lionsgate's case -- there are a number of big players still hanging back. OpenAI has had a hard time shopping Sora around, and its chief operating officer Brad Lightcap recently said the company needs to build "a level of trust" with studios. Studios are wary for good reason, as there are a number of serious concerns that come with generative AI use in entertainment. While some leaders may be hoping to incorporate AI and cut costs, there is a lot of anxiety and apprehension around the actual implementation of AI, specifically the legal and ethical consequences. One of the biggest concerns is around copyright -- specifically, whether AI companies are using copyrighted materials to train their models without the authors' permission. There are over 30 ongoing copyright-specific lawsuits between AI companies and content creators. You've probably heard of the most notable, including The New York Times v. OpenAI and, on the image generator side, a class action lawsuit brought by artists against Stability AI. These cases allege that AI companies used creator content illegally in the development of models and that AI outputs are too similar and infringe on protected intellectual property. Chris Mammen, an intellectual property lawyer and San Francisco office managing partner at Womble Bond Dickinson, said in an interview with CNET, "The plaintiffs in all of those cases are concerned that having all of their work used as training data is indeed eroding not only their ability to earn a livelihood, but also the importance and value of their copyrights and other IP rights." (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) While AI companies and publishers duke it out in court, the companies are free to keep operating as-is. There's a bit of guidance from the US Copyright Office, but there's a lot of debate about how state and federal governments should (or shouldn't) legislate around AI. In all likelihood, the question of AI and copyright will be left to the courts to decide on a case-by-case basis. But the potential of using technology that's built from stolen work is not only legally dicey, it's an ethical breach many creators won't stand for. Protecting IP elements like visual style is also a concern, going hand-in-hand with copyright. For example, many directors spend their careers crafting the looks that define their movies. Think the iconic, angsty blue hue that colors the city of Forks in Twilight. Or literally any movie by Wes Anderson, with his signature colorful style. The visual identities of movies are painstakingly created by teams of directors of photography, lighting and visual effects artists, and color grading experts. Feeding all of that content into an AI image or video generator runs the risk of anyone being able to mimic it. This isn't theoretical; it's something we've already seen. When OpenAI launched its native image generator in ChatGPT earlier this year, people started churning out anime-looking images in the style of Studio Ghibli. Studio Ghibli is one of the most popular animation studios, the maker of hits like Spirited Away and My Neighbor Totoro. It was a depressingly ironic trend, as many critics pointed out that the co-founder of Studio Ghibli, Hayao Miyazaki, had said in a 2017 interview that AI is "an insult to life itself." This is a troubling possibility for studios. 
"Say you're Lionsgate. You don't want the world that the LLM has been able to create, [like] the John Wick world, to all of a sudden show up in somebody else's storyboard, right?" said Rosenberg. "So I think there's a security issue above all... giving away of your trade secrets, your intellectual property, is really first and foremost in the minds of the studios and networks." Many AI generators have guardrails around creating images of specific people, like celebrities and politicians. But these guardrails can be flimsy, and even if you don't use a director's or actor's name, you can describe the look and feel until the AI content is essentially indistinguishable. Lionsgate's AI models should be exclusive to the company, but it highlights how the same concern hits different for studios and individual creators. Studios need to protect their IP; creators don't want anyone to be able to copy their style. There's also the risk of reputational harm from these uses. For example, if you didn't know about the Ghibli ChatGPT trend, it could appear as though Studio Ghibli made a cartoon of a crying woman being deported, as shown in one AI image shared by the White House's official X/Twitter account. These big-picture concerns help explain why it's been hard for tech companies to sell their AIs to entertainment leaders en masse. As entertainment leaders investigate and begin to implement AI, creators' concerns are elevated by labor unions. While some celebrities have been able to fight back against AI encroaching on their work and likeness, like Scarlett Johansson and Keanu Reeves, the majority of people don't have the resources of a celebrity. That's why union protections are so important when it comes to AI, said Duncan Crabtree-Ireland, SAG-AFTRA national executive director and chief negotiator, in an interview with CNET. AI was a key issue during the 2023 strikes by unions representing writers, screen actors, directors and stage performers. The WGA and SAG-AFTRA contracts that emerged from those strikes outlined specific guidelines around the use of AI. In the SAG-AFTRA contract, one of those protections concern digital replicas, the process of scanning people's faces and bodies so that moviemakers can insert synthetic versions of actors into a scene after it's been filmed. Before the contract was enacted, actors were worried that if they chose to sell their likeness, studios could pay actors once for use of their replicas ad infinitum, which could ultimately limit future job opportunities. Without the guardrails against that set in the contract, that process would be "akin to digital indentured servitude," said Crabtree-Ireland. "We're not trying to stop people from allowing others to create digital replicas of them. We just want people to know what it is they're agreeing to when they agree to it, and that that agreement can't just be perpetual and without boundaries," said Crabtree-Ireland. Union guardrails like the ones around digital replicas are step one of a longer path toward finding an equitable balance between innovation and protecting labor interests. To the dismay of some members, the union isn't trying to outright ban generative AI, Crabtree-Ireland said. "Past history teaches us that unions that just try to block technology, they fail. Technological progress cannot be held back by sheer force of will," said Crabtree-Ireland. Instead, the union wants to keep one hand on the wheel. 
"We're going to use every bit of leverage, power and persuasion we can bring to channel these things in the right direction, rather than trying to block them," said Crabtree-Ireland. Unions like SAG-AFTRA protect thousands of workers in the entertainment industry. The power they wield can be used to help industry titans navigate new AI, but more importantly, unions can help guide corporations away from abusive, disastrous or straight-up dumb uses of AI. Union contracts can set important precedents. Not everyone who works in entertainment is eligible for union membership, but by raising the bar and setting limits around AI use, unions can still ensure a healthier work environment and stabilize the future of the industry for current and future creators. There's no shortage of hype surrounding AI in Hollywood, though technical limitations, legal uncertainties and ethical concerns have held it back from a full-throttle invasion some technologists might have envisioned. But continued innovation and evolving legal postures might entice studios and networks to start exploring AI more aggressively and more loudly. For Alexander and Cohen, generative AI will continue to be an issue to grapple with on the festival circuit. But for their own work, a sci-fi miniseries called The Cloaked Realm, the duo said they spent thousands of hours over several years hand-drawing and animating the show. "We didn't even really consider [using AI] because we really care about the depth, the nuance, all these things that we feel like come organically with 2D animation," said Cohen. "I think it emotionally hits people at a different level, and then intellectually, also, people appreciate knowing a human created everything." "Human touch can be replicated, but I often wonder, will the feel, the emotion that gets produced in someone, is that going to be replicated?" said Alexander. "You know the old saying, no plan survives contact with the enemy? I wonder when these AI models, even as they get extremely polished and perfected, will touch people's souls the way that something that's created by humans can."
[2]
AI 'hallucinates' constantly, but there's a solution
The main problem with big tech's experiment with artificial intelligence (AI) is not that it could take over humanity. It's that large language models (LLMs) like OpenAI's ChatGPT, Google's Gemini and Meta's Llama continue to get things wrong, and the problem is intractable. These errors are known as hallucinations; perhaps the most prominent example was the case of US law professor Jonathan Turley, who was falsely accused of sexual harassment by ChatGPT in 2023. OpenAI's solution seems to have been to basically "disappear" Turley by programming ChatGPT to say it can't respond to questions about him, which is clearly not a fair or satisfactory solution. Trying to solve hallucinations after the event and case by case is clearly not the way to go. The same can be said of LLMs amplifying stereotypes or giving western-centric answers. There's also a total lack of accountability in the face of this widespread misinformation, since it's difficult to ascertain how the LLM reached this conclusion in the first place. We saw a fierce debate about these problems after the 2023 release of GPT-4, the most recent major paradigm in OpenAI's LLM development. Arguably the debate has cooled since then, though without justification. The EU passed its AI Act in record time in 2024, for instance, in a bid to be world leader in overseeing this field. But the act relies heavily on AI companies to regulate themselves without really addressing the issues in question. It hasn't stopped tech companies from releasing LLMs worldwide to hundreds of millions of users and collecting their data without proper scrutiny. Meanwhile, the latest tests indicate that even the most sophisticated LLMs remain unreliable. Despite this, the leading AI companies still resist taking responsibility for errors. Unfortunately, LLMs' tendencies to misinform and reproduce bias can't be solved with gradual improvements over time. And with the advent of agentic AI, where users will soon be able to assign projects to an LLM such as, say, booking their holiday or optimising the payment of all their bills each month, the potential for trouble is set to multiply. The emerging field of neurosymbolic AI could solve these issues, while also reducing the enormous amounts of data required for training LLMs. So what is neurosymbolic AI and how does it work? LLMs work using a technique called deep learning, where they are given vast amounts of text data and use advanced statistics to infer patterns that determine what the next word or phrase in any given response should be. Each model -- along with all the patterns it has learned -- is a neural network stored on arrays of powerful computers in large data centers. LLMs can appear to reason using a process called chain-of-thought, where they generate multi-step responses that mimic how humans might logically arrive at a conclusion, based on patterns seen in the training data. Undoubtedly, LLMs are a great engineering achievement. They are impressive at summarizing text and translating, and may improve the productivity of those diligent and knowledgeable enough to spot their mistakes. Nevertheless they have great potential to mislead because their conclusions are always based on probabilities -- not understanding. 
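To make that statistical process concrete, here is a minimal, purely illustrative Python sketch of next-word prediction. The hand-written probability table and the generate function are hypothetical stand-ins: a real LLM learns a vastly larger distribution with a transformer network rather than a lookup table, but the core move is the same: pick the next word by probability, not by understanding.

```python
import random

# Hypothetical "learned" distribution standing in for what a real model
# infers from vast training data: P(next word | previous word).
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "movie": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "movie": {"ended": 1.0},
}

def generate(start, max_words=5):
    # Repeatedly sample the next word from the distribution; no meaning is involved.
    words = [start]
    for _ in range(max_words):
        options = bigram_probs.get(words[-1])
        if not options:
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat": fluent-looking, but nothing is understood
```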
A popular workaround is called "human-in-the-loop": making sure that humans using AIs still make the final decisions. However, apportioning blame to humans does not solve the problem. They'll still often be misled by misinformation. LLMs now need so much training data to advance that we're having to feed them synthetic data, meaning data created by LLMs. This data can copy and amplify existing errors from its own source data, such that new models inherit the weaknesses of old ones. As a result, the cost of programming AIs to be more accurate after their training -- known as "post-hoc model alignment" -- is skyrocketing. It also becomes increasingly difficult for programmers to see what's going wrong because the number of steps in the model's thought process becomes ever larger, making it harder and harder to correct for errors. Neurosymbolic AI combines the predictive learning of neural networks with teaching the AI a series of formal rules that humans learn to be able to deliberate more reliably. These include logic rules, like "if a then b", such as "if it's raining then everything outside is normally wet"; mathematical rules, like "if a = b and b = c then a = c"; and the agreed-upon meanings of things like words, diagrams and symbols. Some of these will be inputted directly into the AI system, while it will deduce others itself by analyzing its training data and doing "knowledge extraction". This should create an AI that will never hallucinate and will learn faster and smarter by organising its knowledge into clear, reusable parts. For example, if the AI has a rule about things being wet outside when it rains, there's no need for it to retain every example of the things that might be wet outside -- the rule can be applied to any new object, even one it has never seen before. During model development, neurosymbolic AI also integrates learning and formal reasoning using a process known as the "neurosymbolic cycle". This involves a partially trained AI extracting rules from its training data, then instilling this consolidated knowledge back into the network before further training with data. This is more energy efficient because the AI needn't store as much data, while the AI is more accountable because it's easier for a user to control how it reaches particular conclusions and improves over time. It's also fairer because it can be made to follow pre-existing rules, such as: "For any decision made by the AI, the outcome must not depend on a person's race or gender". The first wave of AI in the 1980s, known as symbolic AI, was actually based on teaching computers formal rules that they could then apply to new information. Deep learning followed as the second wave in the 2010s, and many see neurosymbolic AI as the third. It's easiest to apply neurosymbolic principles to AI in niche areas, because the rules can be clearly defined. So it's no surprise that we've seen it first emerge in Google's AlphaFold, which predicts protein structures to help with drug discovery; and AlphaGeometry, which solves complex geometry problems. For more broad-based AIs, China's DeepSeek uses a learning technique called "distillation", which is a step in the same direction. But to make neurosymbolic AI fully feasible for general models, there still needs to be more research to refine their ability to discern general rules and perform knowledge extraction. It's unclear to what extent LLM makers are working on this already. 
They certainly sound like they're heading in the direction of trying to teach their models to think more cleverly, but they also seem wedded to the need to scale up with ever larger amounts of data. The reality is that if AI is going to keep advancing, we will need systems that adapt to novelty from only a few examples, that check their understanding, that can multitask and reuse knowledge to improve data efficiency and that can reason reliably in sophisticated ways. This way, well designed digital technology could potentially even offer an alternative to regulation, because the checks and balances would be built into the architecture and perhaps standardized across the industry. There's a long way to go, but at least there's a path ahead.
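The neurosymbolic pattern described in this piece can be pictured with a small, hypothetical sketch: a statistical guess (standing in for a neural network's output) is checked against an explicit, human-readable rule before it is accepted. The function names, rules and confidence scores below are invented purely for illustration; a real system would derive its rules through knowledge extraction and pair them with a trained network.

```python
def neural_guess():
    # Stand-in for a neural network's statistical output: claims with confidence scores.
    return {"raining": 0.9, "ground_outside_is_dry": 0.6}

# One formal rule of the "if a then b" kind: if it is raining, things outside
# are normally wet, so a claim that the ground outside is dry is inconsistent.
RULES = [("raining", "ground_outside_is_dry")]

def apply_rules(beliefs):
    # Discard any statistical guess that contradicts a symbolic rule.
    consistent = dict(beliefs)
    for antecedent, contradicted_claim in RULES:
        if consistent.get(antecedent, 0.0) > 0.5:
            consistent.pop(contradicted_claim, None)
    return consistent

print(apply_rules(neural_guess()))
# {'raining': 0.9}: the guess that conflicted with the rule is dropped,
# a check that a purely statistical model cannot make on its own.
```

The division of labor is the point: learned probabilities propose answers, while explicit rules constrain which answers survive, which is why the approach promises fewer hallucinations and clearer accountability.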
[3]
Artificial Intelligence Is Not Intelligent
Despite what tech CEOs might say, large language models are not smart in any recognizably human sense of the word. On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed "Cellarius," it warned of an encroaching "mechanical kingdom" that would soon bring humanity under its yoke. "The machines are gaining ground upon us," the author ranted, distressed by the breakneck pace of industrialization and technological development. "Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life." We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language. Today, Butler's "mechanical kingdom" is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book -- The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna -- in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam. To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking -- and, soon, feeling -- machines. Altman brags about GPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another. Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate -- understandably, because of the misleading ways AI's loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions. 
Few phenomena demonstrate the perils that can accompany AI illiteracy as well as "Chatgpt induced psychosis," the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. Some users have come to believe that the chatbot they're interacting with is a god -- "ChatGPT Jesus," as a man whose wife fell prey to LLM-inspired delusions put it -- while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner "spiral starchild" and "river walker" in interactions that moved him to tears. "He started telling me he made his AI self-aware," she said, "and that it was teaching him how to talk to God, or sometimes that the bot was God -- and then that he himself was God." Although we can't know the state of these people's minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed." Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist -- it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, "In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised." The fact that the very point of friendship is that it is not personalized -- that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization -- does not seem to occur to him. This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI "dating concierge" that will interact with other users' concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. 
Some technologists want to cut out the human altogether: See the booming market for "AI girlfriends." Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's "tradition of anthropomorphizing": talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis. Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding -- in theory -- only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao's reporting, OpenAI tasked with sorting through posts describing horrifying acts ("parents raping their children, kids having sex with animals") to help improve ChatGPT. "These two features of technology revolutions -- their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable," Hao writes, "are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence." The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on. So is this insight from the Rolling Stone article: The teacher interviewed in the piece, whose significant other had AI-induced delusions, said the situation began improving when she explained to him that his chatbot was "talking to him as if he is the next messiah" only because of a faulty software update that made ChatGPT more sycophantic. If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should -- and should not -- replace, they may be spared its worst consequences.
[4]
'Nobody wants a robot to read them a story!' The creatives and academics rejecting AI - at work and at home
Is artificial intelligence coming for everyone's jobs? Not if this lot have anything to do with it The novelist Ewan Morrison was alarmed, though amused, to discover he had written a book called Nine Inches Pleases a Lady. Intrigued by the limits of generative artificial intelligence (AI), he had asked ChatGPT to give him the names of the 12 novels he had written. "I've only written nine," he says. "Always eager to please, it decided to invent three." The "nine inches" from the fake title it hallucinated was stolen from a filthy Robert Burns poem. "I just distrust these systems when it comes to truth," says Morrison. He is yet to write Nine Inches - "or its sequel, Eighteen Inches", he laughs. His actual latest book, For Emma, imagining AI brain-implant chips, is about the human costs of technology. Morrison keeps an eye on the machines, such as OpenAI's ChatGPT, and their capabilities, but he refuses to use them in his own life and work. He is one of a growing number of people who are actively resisting: people who are terrified of the power of generative AI and its potential for harm and don't want to feed the beast; those who have just decided that it's a bit rubbish, and more trouble than it's worth; and those who simply prefer humans to robots. Go online, and it's easy to find AI proponents who dismiss refuseniks as ignorant luddites - or worse, smug hipsters. I possibly fall into both camps, given that I have decidedly Amish interests (board games, gardening, animal husbandry) and write for the Guardian. Friends swear by ChatGPT for parenting advice, and I know someone who uses it all day for work in her consultancy business, but I haven't used it since playing around after it launched in 2022. Admittedly ChatGPT might have done a better job, but this piece was handcrafted using organic words from my artisanal writing studio. (OK, I mean bed.) I could have assumed my interviewees' thoughts from plundering their social media posts and research papers, as ChatGPT would have done, but it was far more enjoyable to pick up the phone and talk, human to human. Two of my interviewees were interrupted by their pets, and each made me laugh in some way (full disclosure: AI then transcribed the noise). On X, where Morrison sometimes clashes with AI enthusiasts, a common insult is "decel" (decelerationist), but it makes him laugh when people think he's the one who isn't keeping up. "There's nothing [that stops] accelerationism more than failure to deliver on what you promised. Hitting a brick wall is a good way to decelerate," he says. One recent study found that AI answered more than 60% of queries inaccurately. Morrison was drawn into the argument by what he would now call "alarmist fears about the potential for superintelligence and runaway AI. The more I've got into it, the more I realise that's a fiction that's been dangled before the investors of the world, so they'll invest billions - in fact, half a trillion - into this quest for artificial superintelligence. It's a fantasy, a product of venture capital gone nuts." There are also copyright violations - generative AI is trained on existing material - that threaten him as a writer, and his wife, screenwriter Emily Ballou. In the entertainment industry, he says, people are using "AI algorithms to determine what projects get the go-ahead, and that means we're stuck remaking the past. The algorithms say 'More of the same', because it's all they can do." Morrison says he has a long list of complaints. 
"They've been stacking up over the past few years." He is concerned about the job losses (Bill Gates recently predicted AI would lead to a two-day work week). Then there are "tech addiction, the ecological impact, the damage to the education system - 92% of students are now using AI". He worries about the way tech companies spy on us to make AI personalised, and is horrified at AI-enabled weapons being used in Ukraine. "I find that ethically revolting." Others cite similar reasons for not using AI. April Doty, an audiobook narrator, is appalled at the environmental cost - the computational power required to perform an AI search and answer is huge. "I'm infuriated that you can't turn off the AI overviews in Google search," she says. "Whenever you look anything up now you're basically torching the planet." She has started to use other search engines. "But, more and more, we're surrounded by it, and there's no off switch. That makes me angry." Where she still can, she says, "I'm opting out of using AI." In her own field, she is concerned about the number of books that are being "read" by machines. Audible, the Amazon-owned audiobook provider, has just announced it will allow publishers to create audiobooks using its AI technology. "I don't know anybody who wants a robot to read them a story, but I am concerned that it is going to ruin the experience to the point where people don't want to subscribe to audiobook platforms any more," says Doty. She hasn't lost jobs to AI yet but other colleagues have, and chances are, it will happen. AI models can't "narrate", she says. "Narrators don't just read words; they sense and express the feelings beneath the words. AI can never do this job because it requires decades of experience in being a human being." Emily M Bender, professor of linguistics at the University of Washington and co-author of a new book, The AI Con, has many reasons why she doesn't want to use large language models (LLMs) such as ChatGPT. "But maybe the first one is that I'm not interested in reading something that nobody wrote," she says. "I read because I want to understand how somebody sees something, and there's no 'somebody' inside the synthetic text-extruding machines." It's just a collage made from lots of different people's words, she says. Does she feel she is being "left behind", as AI enthusiasts would say? "No, not at all. My reaction to that is, 'Where's everybody going?'" She laughs as if to say: nowhere good. "When we turn to synthetic media rather than authentic media, we are losing out on human connection," says Bender. "That's both at a personal level - what we get out of connecting to other people - and in terms of strength of community." She cites Chris Gilliard, the surveillance and privacy researcher. "He made the very important point that you can see this as a technological move by the companies to isolate us from each other, and to set things up so that all of our interactions are mediated through their products. We don't need that, for us or our communities." Despite Bender's well-publicised position - she has long been a high-profile critic of LLMs - incredibly, she has seen students turn in AI-generated work. "That's very sad." She doesn't want to be policing, or even blaming, students. "My job is to make sure students understand why it is that turning to a large language model is depriving themselves of a learning opportunity, in terms of what they would get out of doing the work." Does she think people should boycott generative AI? 
"Boycott suggests organised political action, and sure, why not?" she says. "I also think that people are individually better off if they don't use them." Some people have so far held out, but are reluctantly realising they may end up using it. Tom, who works in IT for the government, doesn't use AI in his tech work, but found colleagues were using it in other ways. Promotion is partly decided on annual appraisals they have to write, and he had asked a manager whose appraisal had impressed him how he'd done it, thinking he'd spent days on it. "He said, 'I just spent 10 minutes - I used ChatGPT,'" Tom recalls. "He suggested I should do the same, which I don't agree with. I made that point, and he said, 'Well, you're probably not going to get anywhere unless you do.'" Using AI would feel like cheating, but Tom worries refusing to do so now puts him at a disadvantage. "I almost feel like I have no choice but to use it at this point. I might have to put morals aside." Others, despite their misgivings, limit how they use it, and only for specific tasks. Steve Royle, professor of cell biology at the University of Warwick, uses ChatGPT for the "grunt work" of writing computer code to analyse data. "But that's really the limit. I don't want it to generate code from scratch. When you let it do that, you spend way more time debugging it afterwards. My view is, it's a waste of time if you let it try and do too much for you." Accurate or not, he also worries that if he becomes too reliant on AI, his coding skills will atrophy. "The AI enthusiasts say, 'Don't worry, eventually nobody will need to know anything.' I don't subscribe to that." Part of his job is to write research papers and grant proposals. "I absolutely will not use it for generating any text," says Royle. "For me, in the process of writing, you formulate your ideas, and by rewriting and editing, it really crystallises what you want to say. Having a machine do that is not what it's about." Generative AI, says film-maker and writer Justine Bateman, "is one of the worst ideas society has ever come up with". She says she despises how it incapacitates us. "They're trying to convince people they can't do the things they've been doing easily for years - to write emails, to write a presentation. Your daughter wants you to make up a bedtime story about puppies - to write that for you." We will get to the point, she says with a grim laugh, "that you will essentially become just a skin bag of organs and bones, nothing else. You won't know anything and you will be told repeatedly that you can't do it, which is the opposite of what life has to offer. Capitulating all kinds of decisions like where to go on vacation, what to wear today, who to date, what to eat. People are already doing this. You won't have to process grief, because you'll have uploaded photos and voice messages from your mother who just died, and then she can talk to you via AI video call every day. One of the ways it's going to destroy humans, long before there's a nuclear disaster, is going to be the emotional hollowing-out of people." She is not interested. "It is the complete opposite direction of where I'm going as a film-maker and author. Generative AI is like a blender - you put in millions of examples of the type of thing you want and it will give you a Frankenstein spoonful of it." It's theft, she says, and regurgitation. "Nothing original will come out of it, by the nature of what it is. 
Anyone who uses generative AI, who thinks they're an artist, is stopping their creativity." Some studios, such as the animation company Studio Ghibli, have sworn off using AI, but others appear to be salivating at the prospect. In 2023, Dreamworks founder Jeffrey Katzenberg said AI would cut the costs of its animated films by 90%. Bateman thinks audiences will tire of AI-created content. "Human beings will react to this in the way they react to junk food," she says. Deliciously artificial to some, if not nourishing - but many of us will turn off. Last year she set up an organisation, Credo 23, and a film festival, to showcase films made without AI. She likens it to an "organic stamp for films, that tells the audience no AI was used." People, she says, will "hunger for something raw, real and human". In everyday life, Bateman is trying "to be in a parallel universe, where I'm trying to avoid [AI] as much as possible." It's not that she is anti-tech, she stresses. "I have a computer science degree, I love tech. I love salt, too, but I don't put it on everything." In fact, everyone I speak to is a technophile in some way. Doty describes herself as "very tech-forward", but she adds that she values human connection, which AI is threatening. "We keep moving like zombies towards a world that nobody really wants to live in." Royle codes and runs servers, but also describes himself as a "conscientious AI objector". Bender specialises in computational linguistics and was named by Time as one of the top 100 people in AI in 2023. "I am a technologist," she says, "but I believe that technology should be built by communities for their own purposes, rather than by large corporations for theirs." She also adds, with a laugh: "The Luddites were awesome! I would wear that badge with pride." Morrison, too, says: "I quite like the Luddites - people standing up to protect the jobs that keep their families and their communities alive."
A comprehensive look at the ongoing debate surrounding AI in creative industries, its impact on jobs, and the ethical concerns raised by its rapid advancement and implementation.
The entertainment industry is experiencing a significant shift as artificial intelligence (AI) becomes increasingly prevalent in creative processes. At the Portland Festival of Cinema, Animation and Technology, a film that blended AI and human creativity sparked controversy, highlighting the growing tension between technological advancement and traditional artistic methods [1].
Source: CNET
Film festival directors Peter Isaac Alexander and Marisa Cohen face a dilemma: balancing the desire to showcase innovative art while addressing concerns about AI's role in filmmaking. This struggle reflects a broader debate within the industry about the future of human creativity in an AI-dominated landscape [1].
The rapid development of AI tools, particularly in video generation, is transforming the creative process. Companies like OpenAI, Adobe, and Google have released AI video models capable of producing short clips from text or image inputs. While these tools are still limited in duration and quality, they represent a significant step towards more advanced AI-generated content [1].
Despite the potential benefits, AI integration in creative fields raises several concerns:
Job security: Many in the entertainment industry fear that AI could replace human workers, particularly in roles that involve repetitive tasks or can be automated [1].
Ethical considerations: The use of AI in content creation raises questions about copyright, originality, and the value of human creativity [1][3].
Accuracy and reliability: Large language models (LLMs) like ChatGPT are prone to "hallucinations" or generating false information, which can be problematic in various applications [2].
Source: Live Science
Researchers are exploring neurosymbolic AI as a potential solution to the limitations of current AI systems. This approach combines neural networks with formal reasoning rules, aiming to create more reliable and accountable AI systems. Benefits of neurosymbolic AI include:
Reduced hallucinations: Formal rules constrain what the system can assert, making it less likely to generate false statements [2].
Less data and energy: Knowledge is organized into clear, reusable rules, so the model doesn't need to store every example it has seen [2].
Greater accountability: It is easier to trace how the system reached a particular conclusion and to correct it over time [2].
Fairness by design: The system can be required to follow pre-existing rules, such as ensuring decisions do not depend on a person's race or gender [2].
As AI becomes more prevalent, a growing number of creatives and academics are actively resisting its use in their work and personal lives. Their reasons include:
Distrust in AI-generated content: Novelist Ewan Morrison highlights the inaccuracies and potential for misinformation in AI-generated text [4].
Copyright concerns: Many creators worry about AI systems being trained on copyrighted material without proper attribution or compensation [4].
Job displacement: There are fears that AI could lead to significant job losses across various industries [4].
Environmental impact: The computational power required for AI systems has a substantial ecological footprint [4].
Preference for human creativity: Some, like audiobook narrator April Doty, argue that AI cannot replicate the nuanced understanding and expression that humans bring to creative work [4].
Source: The Atlantic
As the debate continues, the entertainment industry and other creative fields must navigate the complex landscape of AI integration. While some embrace the technology for its potential to enhance productivity and creative possibilities, others remain skeptical about its impact on human creativity and employment [1][3][4].
The decisions made by industry leaders, policymakers, and creators today will shape the future of AI in creative fields, influencing everything from content production to audience expectations. As the technology continues to evolve, finding a balance between innovation and preserving the value of human creativity remains a critical challenge for the industry [1][3][4].