3 Sources
[1]
Can a Generative AI Agent Accurately Mimic My Personality?
A large language model interviewed me about my life and gave the information to an AI agent built to portray my personality. Could it convince me it was me? On a gray Sunday morning in March, I told an AI chatbot my life story. Introducing herself as Isabella, she spoke with a friendly female voice that would have been well-suited to a human therapist, were it not for its distinctly mechanical cadence. Aside from that, there wasn't anything humanlike about her; she appeared on my computer screen as a small virtual avatar, like a character from a 1990s video game. For nearly two hours Isabella collected my thoughts on everything from vaccines to emotional coping strategies to policing in the U.S. When the interview was over, a large language model (LLM) processed my responses to create a new artificial intelligence system designed to mimic my behaviors and beliefs -- a kind of digital clone of my personality. A team of computer scientists from Stanford University, Google DeepMind and other institutions developed Isabella and the interview process in an effort to build more lifelike AI systems. Dubbed "generative agents," these systems can simulate the decision-making behavior of individual humans with impressive accuracy. Late last year Isabella interviewed more than 1,000 people. Then the volunteers and their generative agents took the General Social Survey, a biennial questionnaire that has cataloged American public opinion since 1972. Their results were, on average, 85 percent identical, suggesting that the agents can closely predict the attitudes and opinions of their human counterparts. Although the technology is in its infancy, it offers a glimmer of a future in which predictive algorithms can potentially act as online surrogates for each of us. If you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today. When I first learned about generative agents the humanist in me rebelled, silently insisting that there was something about me that isn't reducible to the 1's and 0's of computer code. Then again, maybe I was naive. The rapid evolution of AI has brought many humbling surprises. Time and again, machines have outperformed us in skills we once believed to be unique to human intelligence -- from playing chess to writing computer code to diagnosing cancer. Clearly AI can replicate the narrow, problem-solving part of our intellect. But how much of your personality -- a mercurial phenomenon -- is deterministic, a set of probabilities that are no more inscrutable to algorithms than the arrangement of pieces on a chessboard? The question is hotly debated. An encounter with my own generative agent, it seemed to me, could help me to get some answers. The LLMs behind generative agents and chatbots such as ChatGPT, Claude and Gemini are certainly expert imitators. People have fed texts from deceased loved ones to ChatGPT, which could then conduct text conversations that closely approximated the departed's voices. Today developers are positioning agents as a more advanced form of chatbot, capable of autonomously making decisions and completing routine tasks, such as navigating a Web browser or debugging computer code. They're also marketing agents as productivity boosters, onto which businesses can offload time-intensive human drudgery. 
Amazon, OpenAI, Anthropic, Google, Salesforce, Microsoft, Perplexity and virtually every major tech player have jumped aboard the agent bandwagon.

Joon Sung Park, a leader of Stanford's generative agent work, had always been drawn to what early Disney animators called "the illusion of life." He began his doctoral work at Stanford in late 2020, as the COVID pandemic was forcing much of the world into lockdown and generative AI was starting to boom. Three years earlier, Google researchers had introduced the transformer, a type of neural network that can analyze and reproduce mathematical patterns in text. (The "GPT" in ChatGPT stands for "generative pretrained transformer.") Park knew that video game designers had long struggled to create lifelike characters that could do more than move mechanically and read from a script. He wondered: Could generative AI create authentically humanlike behavior in virtual characters?

He unveiled generative agents in a 2023 conference paper in which he described them as "interactive simulacra of human behavior." They were built atop ChatGPT and integrated with an "agent architecture," a layer of code allowing them to remember information and formulate plans. The design simulates some key aspects of human perception and behavior, says Daniel Cervone, a professor of psychology specializing in personality theory at the University of Illinois Chicago. Generative agents are doing "a big slice of what a real person does, which is to reflect on their experiences, abstract out beliefs about themselves, store those beliefs and use them as cognitive tools to interpret the world," Cervone told me. "That's what we do all the time."

Park dropped 25 generative agents inside Smallville, a virtual space modeled on Swarthmore College, where he had studied as an undergraduate. He included basic affordances such as a café and a bar where the agents could mingle; picture The Sims without a human player calling the shots. Smallville was a petri dish for virtual sociality; rather than watching cells multiply, Park observed the agents gradually coalescing from individual nodes into a unified network. At one point Isabella (the same agent that would later interview me), assigned the role of café owner, spontaneously began handing out invitations to her fellow agents for a Valentine's Day party. "That starts to spark some real signals that this could actually work," Park told me.

Yet as encouraging as those early results were, the residents of Smallville had been programmed with particular personality traits. The real test, Park believed, would lie in building generative agents that could simulate the personalities of living humans. It was a tall order. Personality is a notoriously nebulous concept, fraught with hidden layers. The word itself is rooted in uncertainty, vagary, deception: it's derived from the Latin persona, which originally referred to a mask worn by a stage actor.

Park and his team don't claim to have built perfect simulations of individuals' personalities. "A two-hour interview doesn't [capture] you in anything near your entirety," says Michael Bernstein, an associate professor of computer science at Stanford and one of Park's collaborators. "It does seem to be enough to gather a sense of your attitudes." And they don't think generative agents are close to artificial general intelligence, or AGI -- an as-yet-theoretical system that can match humans on any cognitive task.
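The loop Cervone describes -- observe, reflect, abstract, act -- maps onto a fairly simple program shape. Below is a minimal, hypothetical sketch of that structure in Python. The names here (GenerativeAgent, llm_complete) are placeholders of my own, not the Stanford team's actual code, and llm_complete stands in for a call to any chat-completion model.

```python
from dataclasses import dataclass, field


def llm_complete(prompt: str) -> str:
    """Placeholder for a call to any chat-completion model (assumption)."""
    raise NotImplementedError


@dataclass
class GenerativeAgent:
    name: str
    memories: list[str] = field(default_factory=list)     # raw observations
    reflections: list[str] = field(default_factory=list)  # abstracted beliefs

    def observe(self, event: str) -> None:
        """Store a raw observation in the memory stream."""
        self.memories.append(event)

    def reflect(self) -> None:
        """Abstract higher-level beliefs from recent memories."""
        recent = "\n".join(self.memories[-20:])
        prompt = (
            f"Here are recent experiences of {self.name}:\n{recent}\n"
            "What broader beliefs about themselves should they draw? List three."
        )
        self.reflections.append(llm_complete(prompt))

    def act(self, situation: str) -> str:
        """Decide what to do next, conditioned on beliefs plus recent memories."""
        prompt = (
            f"You are {self.name}. Your beliefs: {self.reflections}\n"
            f"Recent memories: {self.memories[-5:]}\n"
            f"Situation: {situation}\nWhat do you do next?"
        )
        return llm_complete(prompt)
```

The reflection step is the interesting part: decisions are conditioned not just on raw memories but on beliefs abstracted from them, which is what would let such an agent extrapolate beyond what it was explicitly told.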
In their latest paper, Park and his colleagues argue that their agents could help researchers understand complex, real-world social phenomena, such as the spread of online misinformation and the outcome of national elections. If they can accurately simulate individuals, then they can theoretically set the simulations loose to interact with one another and see what kind of social behaviors emerge. Think Smallville on a much bigger scale. Yet, as I would soon discover, generative agents may only be able to imitate a very narrow and simplified slice of the human personality.

Meeting my generative agent a week after my interview with Isabella felt like looking at myself in a funhouse mirror: I knew I was seeing my own reflection, but the image was warped and twisted. The first thing I noticed was that the agent -- let's say "he" -- didn't speak like me. I was on a video call with Park, and the two of us were taking turns asking him questions. Unlike Isabella, he didn't come with his own avatar; he just appeared as faceless lines of green text spilling across my screen. We were testing his ability to make informed guesses about my life, filling in information I hadn't directly provided to Isabella.

The results were somewhat disappointing. At one point, I asked him to tell me a secret about himself that no one else knows, hoping he would surface some kind of moderately deep insight. He said he loved astronomy. True enough but hardly revelatory. His real talent seemed to be inferring some of the more mundane details of my life. When asked if his family had dogs growing up, he correctly answered yes, even though I had only told Isabella that my sister and parents have dogs today. I had, however, described my childhood in Colorado, which was full of family camping trips in the mountains, and the agent had apparently deduced that somebody who grew up in such an environment probably also grew up with dogs. "Those are the basic boundaries of this technology," Park told me. "In the absence of the ground-truth information, it will try to make its best guess."

At times those guesses were comically off the mark. When asked to recount an embarrassing moment from his past, he fabricated a story about a college party at which he'd lost his balance while dancing and fallen onto a table full of snacks. Thankfully, that never happened to the real me. Here, too, the system was connecting disparate bits of my personal data and doing its best to concoct something plausible.

Toward the end of the Q&A, I ventured into the deep end by asking whether he believed in God. The answer wasn't far from how I might respond. "I wasn't raised in a specific religious framework, but I've always felt a connection with something greater than myself, especially when I'm in nature," my agent wrote. "It's like there's a kind of transcendent energy or consciousness that we can tap into."

Most memorably, there were moments that felt like genuine insight. I had had a long, stressful few months. I had started working as a full-time freelance writer last October, ushering in a period of professional freedom but also chronic uncertainty. I'd spent much of the winter hunkered down, reading and writing, in my small New York City apartment, and I was feeling the psychological effects of prolonged isolation. Embracing the spirit of spring, I had resolved to make what I believed would be some healthy changes, starting with a greater effort to maintain work-life balance.
I asked my generative agent, "What's some advice you'd want to offer to your past self?" "Embrace uncertainty a bit more," he began. I had to pause. The response so closely echoed the themes I'd been articulating in my journal the previous day that it was almost as if I were writing the agent's words myself. "I think I spent a lot of time worrying about the future and trying to control every aspect of my life, and that often led to unnecessary stress," he continued. "I would advise myself to trust the process and be more open to the unexpected paths that life can take.... It's easy to get caught up in career ambitions, but nurturing relationships and taking time for oneself is equally important."

Despite those moments of pleasant surprise, my conversation with my generative agent left me feeling hollow. I felt I had met a two-dimensional version of myself -- all artifice, no depth. It had captured a veneer of my personality, but it was just that: a virtual actor playing a role, wearing my data as a mask. At no point did I get the feeling that I was interacting with a system that truly captured my voice and my thoughts.

But that isn't the point. Generative agents don't need to sound like you or understand you in your entirety to be useful, just as psychologists don't need to understand every quirk of your behavior to make broad-stroke diagnoses of your personality type. Adam Green, a neuroscientist at Georgetown University who studies the impacts of AI on human creativity, believes that that lack of specificity, combined with our growing reliance on a handful of powerful algorithms, could filter out much of the color and quirks that make each of us unique. Even the most advanced algorithm will revert to the mean of the dataset on which it's been trained. "That matters," Green says, "because ultimately what you'll have is homogenization." In his view, the expanding ubiquity of predictive AI models is squeezing our culture into a kind of groupthink, in which all our idiosyncrasies slowly but surely become discounted as irrelevant outliers in the data of humanity.

After meeting my generative agent, I remembered the feeling I had back when I spoke with Isabella -- my inner voice that had rejected the idea that my personality could be re-created in silicon or, as Meghan O'Gieblyn put it in her book God, Human, Animal, Machine, "that the soul is little more than a data set." I still felt that way. If anything, my conviction had been strengthened. I was also aware that I might be falling prey to the same kind of hubris that once kept early critics of AI from believing that computers could ever compose decent poetry or outmatch humans in chess. But I was willing to take that risk.
[2]
AI Is a Mass-Delusion Event
It is a Monday afternoon in August, and I am on the internet watching a former cable-news anchor interview a dead teenager on Substack. This dead teenager -- Joaquin Oliver, killed in the mass shooting at Marjory Stoneman Douglas High School, in Parkland, Florida -- has been reanimated by generative AI, his voice and dialogue modeled on snippets of his writing and home-video footage. The animations are stiff, the model's speaking cadence is too fast, and in two instances, when it is trying to convey excitement, its pitch rises rapidly, producing a digital shriek. How many people, I wonder, had to agree that this was a good idea to get us to this moment? I feel like I'm losing my mind watching it.

Jim Acosta, the former CNN personality who's conducting the interview, appears fully bought in to the premise, adding to the surreality: He's playing it straight, even though the interactions are so bizarre. Acosta asks simple questions about Oliver's interests and how the teenager died. The chatbot, which was built with the full cooperation of Oliver's parents to advocate for gun control, responds like a press release: "We need to create safe spaces for conversations and connections, making sure everyone feels seen." It offers bromides such as "More kindness and understanding can truly make a difference."

On the live chat, I watch viewers struggle to process what they are witnessing, much in the same way I am. "Not sure how I feel about this," one writes. "Oh gosh, this feels so strange," another says. Still another thinks of the family, writing, "This must be so hard." Someone says what I imagine we are all thinking: "He should be here."

The Acosta interview was difficult to process in the precise way that many things in this AI moment are difficult to process. I was grossed out by Acosta for "turning a murdered child into content," as the critic Parker Molloy put it, and angry with the tech companies that now offer a monkey's paw in the form of products that can reanimate the dead. I was alarmed when Oliver's father told Acosta during their follow-up conversation that Oliver "is going to start having followers," suggesting an era of murdered children as influencers. At the same time, I understood the compulsion of Oliver's parents, still processing their profound grief, to do anything in their power to preserve their son's memory and to make meaning out of senseless violence. How could I possibly judge the loss that leads Oliver's mother to talk to the chatbot for hours on end, as his father described to Acosta -- what could I do with the knowledge that she loves hearing the chatbot say "I love you, Mommy" in her dead son's voice?

The interview triggered a feeling that has become exceedingly familiar over the past three years. It is the sinking feeling of a societal race toward a future that feels bloodless, hastily conceived, and shruggingly accepted. Are we really doing this? Who thought this was a good idea? In this sense, the Acosta interview is just a product of what feels like a collective delusion. This strange brew of shock, confusion, and ambivalence, I've realized, is the defining emotion of the generative-AI era. Three years into the hype, it seems that one of AI's enduring cultural impacts is to make people feel like they're losing it.

During his interview with Acosta, Oliver's father noted that the family has plans to continue developing the bot. "Any other Silicon Valley tech guy will say, 'This is just the beginning of AI,'" he said.
"'This is just the beginning of what we're doing.'" Just the beginning. Perhaps you've heard that too. "Welcome to the ChatGPT generation." "The Generative AI Revolution." "A new era for humanity," as Mark Zuckerberg recently put it. It's the moment before the computational big bang -- everything is about to change, we're told; you'll see. God may very well be in the machine. Silicon Valley has invented a new type of mind. This is a moment to rejoice -- to double down. You're a fool if you're not using it at work. It is time to accelerate. How lucky we are to be alive right now! Yes, things are weird. But what do you expect? You are swimming in the primordial soup of machine cognition. There are bound to be growing pains and collateral damage. To live in such interesting times means contending with MechaHitler Grok and drinking from a fire hose of fascist-propaganda slop. It means Grandpa leaving confused Facebook comments under rendered images of Shrimp Jesus or, worse, falling for a flirty AI chatbot. This future likely requires a new social contract. But also: AI revenge porn and "nudify" apps that use AI to undress women and children, and large language models that have devoured the total creative output of humankind. From this morass, we are told, an "artificial general intelligence" will eventually emerge, turbo-charging the human race or, well, maybe destroying it. But look: Every boob with a T-Mobile plan will soon have more raw intelligence in their pocket than has ever existed in the world. Keep the faith. Breathlessness is the modus operandi of those who are building out this technology. The venture capitalist Marc Andreessen is quote-tweeting guys on X bleating out statements such as "Everyone I know believes we have a few years max until the value of labor totally collapses and capital accretes to owners on a runaway loop -- basically marx' worst nightmare/fantasy." How couldn't you go a bit mad if you took them seriously? Indeed, it seems that one of the many offerings of generative AI is a kind of psychosis-as-a-service. If you are genuinely AGI-pilled -- a term for those who believe that machine-born superintelligence is coming, and soon -- the rational response probably involves some combination of building a bunker, quitting your job, and joining the cause. As my colleague Matteo Wong wrote after spending time with people in this cohort earlier this year, politics, the economy, and current events are essentially irrelevant to the true believers. It's hard to care about tariffs or authoritarian encroachment or getting a degree if you believe that the world as we know it is about to change forever. There are maddening effects downstream of this rhetoric. People have been involuntarily committed or had delusional breakdowns after developing relationships with chatbots. These stories have become a cottage industry in themselves, each one suggesting that a mix of obsequious models, their presentation of false information as true, and the tools' ability to mimic human conversation pushes vulnerable users to think they've developed a human relationship with a machine. Subreddits such as r/MyBoyfriendIsAI, in which people describe their relationships with chatbots, may not be representative of most users, but it's hard to browse through the testimonials and not feel that, just a few years into the generative-AI era, these tools have a powerful hold on people who may not understand what it is they're engaging with. 
As all of this happens, young people are experiencing a phenomenon that the writer Kyla Scanlon calls the "End of Predictable Progress." Broadly, the theory argues that the usual pathways to a stable economic existence are no longer reliable. "You're thinking: These jobs that I rely on to get on the bottom rung of my career ladder are going to be taken away from me" by AI, she recently told the journalist Ezra Klein. "I think that creates an element of fear."

The feeling of instability she describes is a hallmark of the generative-AI era. It's not at all clear yet how many entry-level jobs will be claimed by AI, but the messaging from enthusiastic CEOs and corporations certainly sounds dire. In May, Dario Amodei, the CEO of Anthropic, warned that AI could wipe out half of all entry-level white-collar jobs. In June, Salesforce CEO Marc Benioff suggested that up to 50 percent of the company's work was being done by AI.

The anxiety around job loss illustrates the fuzziness of this moment. Right now, there are competing theories as to whether AI is having a meaningful effect on employment. But real and perceived impact are different things. A recent Quinnipiac poll found that, "when it comes to their day-to-day life," 44 percent of surveyed Americans believe that AI will do more harm than good. The survey found that Americans believe the technology will cause job loss -- but many workers appeared confident in the security of their own job. Many people simply don't know what conclusions to draw about AI, but it is impossible not to be thinking about it.

OpenAI CEO Sam Altman has demonstrated his own uncertainty. In a blog post titled "The Gentle Singularity," published in June, Altman argued that "we are past the event horizon" and close to building digital superintelligence, and that "in some big sense, ChatGPT is already more powerful than any human who has ever lived." He delivered the classic rhetorical flourishes of AI boosters, arguing that "the 2030s are likely going to be wildly different from any time that has come before." And yet the post also retreats ever so slightly from the dramatic rhetoric of inevitable "revolution" that he has previously employed. "In the most important ways, the 2030s may not be wildly different," he wrote. "People will still love their families, express their creativity, play games, and swim in lakes" -- a cheeky nod to the endurance of our corporeal form, as a little treat.

Altman is a skilled marketer, and the post might simply be a way to signal a friendlier, more palatable future for those who are a little freaked out. But a different way to read the post is to see Altman hedging slightly in the face of the technology's potential limits. Earlier this month, OpenAI released GPT-5, to mixed reviews. Altman had promised "a Ph.D.-level" intelligence on any topic. But early tests of GPT-5 revealed all kinds of anecdotal examples of sloppy answers to queries, including hallucinations, simple-arithmetic errors, and failures in basic reasoning. Some power users who'd become infatuated with previous versions of the software were angered and even bereft by the update. Altman placed particular emphasis on the product's usability and design: Paired with "The Gentle Singularity," GPT-5 seems like an admission that superintelligence is still just a concept.

And yet, the philosopher role-play continues. Not long before the launch, Altman appeared on the comedian Theo Von's popular podcast.
The discussion veered into the thoughtful science-fiction territory that Altman tends to inhabit, at one point turning to visions of blanketing the Earth, even the atmosphere, with data centers. What exactly is a person, listening in their car on the way to the grocery store, to make of conversations like this? Surely, there's a cohort that finds covering the Earth or atmosphere with data centers very exciting. But what about those of us who don't? Altman and lesser personalities in the AI space often talk this way, making extreme, matter-of-fact proclamations about the future and sounding like kids playing a strategy game. This isn't a business plan; it's an idle daydream.

Similarly disorienting is the fact that these visions and pontifications are driving change in the real world. Even if you personally don't believe in the hype, you are living in an economy that has reoriented itself around AI. A recent report from The Wall Street Journal estimates that Big Tech's spending on IT infrastructure in 2025 is "acting as a sort of private-sector stimulus program," with the "Magnificent Seven" tech companies -- Meta, Alphabet, Microsoft, Amazon, Apple, Nvidia, and Tesla -- spending more than $100 billion on capital expenditures in recent months. The flip side of such consolidated investment in one tech sector is a giant economic vulnerability that could lead to a financial crisis. This is the AI era in a nutshell. Squint one way, and you can portray it as the saving grace of the world economy. Look at it more closely, and it's a ticking time bomb lodged in the global financial system. The conversation is always polarized. Keep the faith.

It's difficult to deny that generative-AI tools are transformative, insomuch as their adoption has radically altered the economy and the digital world. Social networks and the internet at large have been flooded with AI slop and synthetic text. Spotify and YouTube are filling up with AI-generated songs and videos, some of which get millions of streams. Sometimes this is helpful: A bot artfully summarizes a complex PDF. They are, by most accounts, truly helpful coding tools. Kids use them to build study guides. They're good at saving you time by churning out anemic emails. Also, a health-care chatbot made up fake body parts. The FDA has introduced a generative-AI tool to help fast-track drug and medical-device approvals -- but the tool keeps making up fake studies. To scan the AI headlines is a daily exercise in trying to determine the cost that society is paying for these perceived productivity benefits. For example, with a new Google Gemini-enabled smartwatch, you can ask the bot to "tell my spouse I'm 15 minutes late and send it in a jokey tone" instead of communicating yourself. This is followed by news of a study suggesting that ChatGPT power users might be accumulating a "cognitive debt" from using the tool.

In recent months, I've felt unmoored by all of this: by a technology that I find useful in certain contexts being treated as a portal to sentience; by a billionaire confidently declaring that he is close to making breakthroughs in physics by conversing with a chatbot; by a "get that bag" culture that seems to have accepted these tools without much consideration of the repercussions; by the discourse. I hear the chatter everywhere -- a guy selling produce at the farmers' market makes a half-hearted joke that AI can't grow blueberries; a woman at the airport tells her friend that she asked ChatGPT for makeup recommendations.
Most of these conversations are poorly informed, conducted by people who have been bombarded for years now by hype but who have also watched as some of these tools have become ingrained in their life or in the life of people they know. They're not quite excited or jaded, but almost all of them seem resigned to dealing with the tools as part of their future. Remember -- this is just the beginning ... right? This is the language that the technology's builders and backers have given us, which means that discussions that situate the technology in the future are being had on their terms. This is a mistake, and it is perhaps the reason so many people feel adrift.

Lately, I've been preoccupied with a different question: What if generative AI isn't God in the machine or vaporware? What if it's just good enough, useful to many without being revolutionary? Right now, the models don't think -- they predict and arrange tokens of language to provide plausible responses to queries. There is little compelling evidence that they will evolve without some kind of quantum research leap. What if they never stop hallucinating and never develop the kind of creative ingenuity that powers actual human intelligence? The models being good enough doesn't mean that the industry collapses overnight or that the technology is useless (though it could). The technology may still do an excellent job of making our educational system irrelevant, leaving a generation reliant on getting answers from a chatbot instead of thinking for themselves, without the promised advantage of a sentient bot that invents cancer cures.

Good enough has been keeping me up at night. Because good enough would likely mean that not enough people recognize what's really being built -- and what's being sacrificed -- until it's too late. What if the real doomer scenario is that we pollute the internet and the planet, reorient our economy and leverage ourselves, outsource big chunks of our minds, realign our geopolitics and culture, and fight endlessly over a technology that never comes close to delivering on its grandest promises? What if we spend so much time waiting and arguing that we fail to marshal our energy toward addressing the problems that exist here and now? That would be a tragedy -- the product of a mass delusion. What scares me the most about this scenario is that it's the only one that doesn't sound all that insane.
[3]
Making cash off 'AI slop': The surreal video business taking over the web
Luis Talavera, a 31-year-old loan officer in eastern Idaho, first went viral in June with an AI-generated video on TikTok in which a fake but lifelike old man talked about soiling himself. Within two weeks, he had used AI to pump out 91 more, mostly showing fake street interviews and jokes about fat people to an audience that has surged past 180,000 followers, some of whom comment to ask if the scenes are real.

The low-effort, high-volume nature of AI videos has earned them the nickname "AI slop," and Talavera knows his videos aren't high art. But they earn him about $5,000 a month through TikTok's creator program, he said, so every night and weekend he spends hours churning them out. "I've been on my couch holding my 3-month-old daughter, saying, 'Hey, ChatGPT, we're gonna create this script,'" he said.

Nothing has transformed or polluted the creative landscape in the past few years quite like AI video, whose tools turn text commands into full-color footage that can look uncannily real. In the three years since ChatGPT's launch, AI videos have come to dominate the social web, copying and sometimes supplanting the human artists and videographers whose work helped train the systems in the first place. Their power has spawned a wild cottage industry of AI-video makers, enticed by the possibility of infinite creation for minimal work.

Adele, a 20-year-old student in Florida who spoke on the condition that only her first name be used because she fears harassment, told The Washington Post she is taking a break from college to focus on making money from her AI-video accounts. Another creator in Arizona who went viral with an AI airport kangaroo said he made $15,000 in commissions in three months, speaking on the condition of anonymity out of concern over online harassment.

But the flood of financially incentivized "slop" has also given rise to a strange new internet, where social media feeds overflow with unsettlingly lifelike imagery and even real videos can appear suspect. Some viral clips now barely rely on humans at all, with AI tools generating not just the imagery but the ideas. "I think of it more as a science than an art," said one 25-year-old creator in Phoenix, who uses the online name Infinite Unreality and spoke on the condition of anonymity because he had received threats online. "In reality, there's not a whole lot of creativity happening. And whatever creativity is happening is coming from the computer."

Some of the videos are otherworldly art pieces or cartoonishly goofy satires, carrying labels marking them as AI-made. But many others are deceptively realistic and styled as news reports, influencer posts or mean-spirited jokes, often in hopes they'll be shocking enough to grab attention -- and from there, revenue. Built on tools from America's biggest tech giants, offered free or at low cost, the videos have touched off a kind of existential panic among the purveyors of traditional art, fueling anxiety that they could crowd out filmmakers, journalists and other creators for whom every scene takes money and time.

"As AI accelerates the production of content, human creativity will inevitably feel overwhelmed," said Tony Sampson, a senior academic at the University of Essex who studies digital communication. AI videos don't try to compete on "authenticity, aesthetic value or thought-provoking concepts," he said. Instead, they're pumped out at industrial speed for maximum engagement, relying on viewers' shock and fascination to make them spread.
The creators themselves say that AI videos are inevitable, regardless of their impact, and that they enjoy experimenting on AI's cutting edge. They are also eager to reap the rewards of mass attention: Juan Pablo Jiménez Domínguez, a 29-year-old creator known online as Pablo Prompt who works at a university in the Canary Islands, said he has used AI to create videos for ad campaigns and now makes enough that he "could live entirely from this work." "A few months ago, we couldn't do half the things we can do now," he said. The technology, he added, will help "bring our ideas to life without the technical or financial blocks that used to hold us back."

"A human being, just like you"

The main benchmark for AI video is known as the Will Smith Eating Spaghetti test, and it works exactly how it sounds: A tool's progress is graded by how well it can make the actor look like he's chowing down. In 2023, the best versions looked muddy and deformed: Noodles oozed cartoonishly, eyes bugged out. This year's top performer, however, is practically undetectable as AI, save for one giveaway: The fake Smith makes crunching sounds, because the AI doesn't know how real spaghetti gets chewed.

The quirk, in a Google-made tool called Veo 3, actually represents a major breakthrough for AI video: Unlike past tools, Veo 3 generates sound for every scene. And the progress continues rapidly: Google announced last month that the tool can now animate any photo into a lifelike eight-second clip.

Every link of the AI-video supply chain has shown extraordinary progress over the past year, multiplying video-makers' production power. A creator might, for instance, draft video ideas and dialogue with ChatGPT, generate images with Midjourney, compose realistic voices with ElevenLabs and animate it all together with OpenAI's Sora, Meta's Movie Gen or a smaller upstart, such as Hailuo, Luma or Kling. (A schematic sketch of this assembly line appears at the end of this section.)

In the late 2010s, amateurs used early, kludgy AI tools to splice women's faces into "deepfake" pornography, soliciting money for individual requests. But the newer tools have made the process so simple that basically anyone can use them -- as seen on Elon Musk's social network X, where users have prompted its AI tool Grok to create fake, explicit videos of Taylor Swift.

"Five years ago, AI video was nonexistent to complete garbage. One year ago it was OK, not very usable, sort of just beginning," said Mark Gadala-Maria, a co-founder at the AI tool Post Cheetah who tracks video trends. "And today it's virtually indistinguishable from reality."

The shift has unleashed a barrage of AI video onto the web. In May, four of the 10 fastest-growing YouTube channels by subscribers trafficked in AI videos, an analysis in Sherwood News found, including Masters of Prophecy ('80s-style synthwave music videos) and Chick of Honor (nonsensical animal skits). Beyond video, there is AI music; one band, Velvet Sundown, had its AI-generated folk song "Dust on the Wind" climb to the top of Spotify's Viral 50 charts, despite the fact that the bandmates don't actually exist.

A single viral hit can spawn thousands of copycats, as with the videos of fake roller-coaster disasters, Bigfoot video diaries, jet-flying babies and Jiménez Domínguez's cats flipping off a diving board. To stand out, some creators have built AI-generated influencers with lives a viewer can follow along with. "Why does everybody think I'm AI? ... I'm a human being, just like you guys," says the AI woman in one since-removed TikTok video, which was watched more than 1 million times.
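To make the supply chain described above concrete, here is a hypothetical sketch of the script-to-clip assembly line in Python. Every stage function is a placeholder standing in for a call to some commercial tool (a chat model for the script, an image model for keyframes, a voice model for narration, a video model for animation); none of these function names are real vendor APIs.

```python
from pathlib import Path


def write_script(idea: str) -> str:
    """Draft dialogue and a shot list from a one-line idea (chat model)."""
    raise NotImplementedError  # placeholder, not a real API


def generate_keyframes(script: str) -> list[bytes]:
    """Produce still images for each shot (image model)."""
    raise NotImplementedError  # placeholder, not a real API


def synthesize_voice(script: str) -> bytes:
    """Render the dialogue as audio (voice model)."""
    raise NotImplementedError  # placeholder, not a real API


def animate(keyframes: list[bytes], audio: bytes) -> bytes:
    """Animate the stills and mux in the audio (video model)."""
    raise NotImplementedError  # placeholder, not a real API


def make_clip(idea: str, out: Path) -> None:
    """Run one idea through the whole assembly line."""
    script = write_script(idea)
    video = animate(generate_keyframes(script), synthesize_voice(script))
    out.write_bytes(video)


# A creator's nightly routine is then just a loop over trending ideas:
# for i, idea in enumerate(trending_ideas):
#     make_clip(idea, Path(f"clip_{i}.mp4"))
```

The economics follow from the structure: once each stage is a single model call, producing the next clip is just another loop iteration, which is how creators can churn out dozens of videos a night.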
The best-performing videos, Gadala-Maria said, have often relied on "shock value," such as racist and sexist jokes depicting Black women as primates, as first reported by Wired, or joking about what young "AI gals gone wild" would do for cash. Others have ventured into dreamlike horror. One video showing a dog biting a woman's face off, revealing a salad, has more than 250 million views.

The major social media platforms, scared of driving viewers away, have tried to crack down on slop accounts, using AI tools of their own to detect and flag videos they believe were synthetically made. YouTube last month said it would demonetize creators for "inauthentic" and "mass-produced" content. But the systems are imperfect, and the creators can easily spin up new accounts -- or just push their AI tools to pump out videos similar to the banned ones, dodging attempts to snuff them out. "Humans are attracted to things that are over the top," Gadala-Maria said. "And AI is really good at that."

"Slop money"

The typical AI creator's first dollar comes from the video platforms themselves, through the kinds of incentive programs that TikTok, YouTube and Instagram built to reward viral success. Adele, the Florida student, shared a screenshot from her TikTok account showing she had made $886 within four days from an AI-made video showing a fake influencer eating glass fruit. The 20-year-old said she had recently paused her psychology studies to focus on her entrepreneurial goals, including her "AI Viral Club," which offers video-making guides to roughly 70 subscribers paying $29 a month. "I've seen a lot of my friends have a really hard time getting jobs, even with their degrees," she said. "This is the future."

Like Adele, many creators have worked to diversify beyond viral payouts, selling AI-tool courses and templates to aspiring creators eager to make their own. After an AI video of two women in a Korean-style "mukbang" eating show went viral, its creator began selling a $15 visual handbook on how others could copy its style. The creator, Jayla Bennett, who uses the account name "Gigglegrid.ai," said she is 26, works part-time in North Carolina and just started making AI videos this summer, seeing a chance at easy money. "The trick is to get ahead of the curve and not be a part of the wave," she said.

Many creators also sell "prompt drops," listing the commands they gave the AI to make a certain scene, while others charge for custom-commissioned work. One creator said he's able to charge between $200 and $300 for a five-second clip.

Even bigger deals are being made. The prediction-gambling company Kalshi paid for a TV commercial during the NBA Finals featuring AI people, including a woman being battered by a hurricane, screaming their bets about current events. Jack Such, a Kalshi spokesperson, said the video cost $2,000 in AI-prompting fees and went from idea to live in less than 72 hours, far quicker than a traditional studio could manage. The creator, PJ Accetturo, said "high-dopamine" AI videos would be "the ad trend of 2025."

Angst over AI has roiled the traditional media for years, helping ignite the Hollywood strikes in 2023 and legal battles over artists' rights. In June, U.S. District Judge Vince Chhabria, ruling narrowly for Meta in a lawsuit brought by authors accusing the company of violating copyright law by training AI on their books, said AI would "dramatically undermine the incentive for human beings to create things the old-fashioned way."
The technology has steadily inched its way into filmmaking nevertheless. Ted Sarandos, a co-chief of Netflix, said last month that the streamer had recently used AI video tools for the first time to help animate a building collapse for an Argentine sci-fi show, and that the move had been cheaper and "10 times faster" than a traditional special-effects crew.

Even for amateurs, AI video's ease of use has spawned a global business. Jiaru Tang, a researcher at the Queensland University of Technology who recently interviewed creators in China, said AI video has become one of the hottest new income opportunities there for workers in the internet's underbelly, who previously made money writing fake news articles or running spam accounts. Many university students, stay-at-home moms and the recently unemployed now see AI video as a kind of gig work, like driving an Uber. Small-scale creators she interviewed typically did their day jobs and then, at night, "spent two to three hours making AI-slop money," she said. A few she spoke with made $2,000 to $3,000 a month at it.

"They see their business as internet traffic, chasing really short-term trends, some of which are three to four days long," said Patrik Wikstrom, a professor who oversaw Tang's research. "They don't really care about if this is morally sound or if this is creative. They're chasing the traffic. They're chasing the next thing."

"They just want to be entertained"

For the AI-video creator Daryl Anselmo, this moment recalls a similarly massive shift known as the "Demoscene," an underground movement in the '90s built on computer nerds tinkering with real-time 3D graphics before "Toy Story" and the Sony PlayStation made them mainstream. That era, too, stirred up apprehension among animators over the death of art. It also churned out a lot of slop. But the longtime video game artist has nevertheless gone all in on AI video, believing it offers a revolutionary new kind of artistic freedom.

From his home in Vancouver, B.C., he describes himself as the creative director for a team of semiautonomous AI machines, each with its own task: visualizing ideas, generating images, animating scenes and stitching them together into experimental videos that are often skin-crawling and avant-garde. One morning, he takes photos of some leftover spinach and commands the system to make a spinach monster. It creates 10 different takes, and he chooses the leafiest and most menacing, telling the tools to examine the last video in the sequence and create the next-most-logical scene.

All of the experimentation has cost him in the form of graphics-processing bills, which total thousands of dollars a month. But the videos have won him consulting work from companies eager to emulate what he calls his "grimoire" of AI tools. And they've gained him attention on social media, where he often feels he must fight the urge to give people what they want: the creepiest videos, the most over-the-top.

He can now churn out phantasmagoric scenes of hollow-eyed monsters at a speed and quality that would have once required a specialized team, but the pace of advancement slightly freaks him out. The 10 seconds of high-quality video that took him 10 minutes to create last year now takes just two minutes and, he expects, will soon take just a few seconds. At that speed, he said, creators could start really pushing the boundaries, rolling out hyper-personalized commercials and interactive videos a viewer could shape in real time, like a video game come to life.
"I don't know if we're prepared for the flood of generative media that's about to hit us," he said. Even for those with less experience than Anselmo, this level of AI power has changed the industry. The creator in Phoenix, Infinite Unreality, started playing around with AI video while working in IT for his dad's company, hoping to spark a content-creator career with "the most returns off minimal effort." He made his first videos by taking viral clips on Instagram Reels and throwing them into Sora, asking the AI to transform them into something new. His first viral hit, which he described as "some fat dude getting a massage," gained 30 million views, but many more have followed. He now uses AI in every part of his eight-step workflow, from generating ideas to adding his logo in postproduction to discourage video thieves. His widely shared video of a seemingly real kangaroo at an airport gate, he said, took him 15 minutes, mainly because it was the AI tool's idea; he had asked it to spit out something that would go viral, and it did. "I don't want to sit here and act like I'm this genius," he said. "I'm an entrepreneur." But he has been unnerved by some of the responses to his more popular videos, including threats from viewers who say he's summoning something dark he can't control. His videos of lizard-headed babies -- again, the AI tool's idea -- have been especially unpopular. "People are saying, 'This is disturbing.' But this is what you guys want to watch at the end of the day," he said. It's not all easy money because many of the companies charge for individual AI-processing tasks; he budgets himself about $100 a day in AI fees, and a single 10-second rendering costs him about $7.50 to make. But he still expects to be made obsolete in short order, saying he believes that "in a year from now, pretty much everything is going to be very easy for the average person to" make. His bigger fear, beyond how he'll pay his mortgage when that time comes, is a more existential one: about what all this limitless creation is doing to our brains. "When you have every single form of media, every possible thing you can think of at your fingerprints at all times," he said, "is anything exciting anymore?" Gadala-Maria, in contrast, isn't worried. He expects AI one day will cement itself as the most powerful medium for human storytelling -- not a meme or a novelty but an art form all its own. "People generally don't care how their entertainment is created," he said. "They just want to be entertained."
An exploration of recent advancements in AI-generated content, from personality mimicking to the creation of viral videos, and their impact on society and media.
Recent developments in artificial intelligence have led to the creation of "generative agents" capable of closely mimicking human personalities. Researchers from Stanford University, Google DeepMind, and other institutions have developed an AI system that can simulate individual human decision-making behavior with impressive accuracy [1]. In a study involving over 1,000 participants, these AI agents achieved an 85% match rate when compared with their human counterparts' responses on the General Social Survey [1].
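A figure like that 85% match rate can be read as a simple agreement score: the fraction of questions on which a person and their agent give the same answer, averaged over participants. The minimal sketch below illustrates that calculation; it is an illustration only, since the researchers reportedly also normalized their published figure against how consistently each participant answered the same questions on a retake, which a raw match rate ignores.

```python
def agreement(human: list[str], agent: list[str]) -> float:
    """Fraction of questions where the agent's answer matches the human's."""
    assert len(human) == len(agent)
    return sum(h == a for h, a in zip(human, agent)) / len(human)


def mean_agreement(pairs: list[tuple[list[str], list[str]]]) -> float:
    """Average the per-participant agreement across the whole study."""
    return sum(agreement(h, a) for h, a in pairs) / len(pairs)


# Toy example: two participants answering three questions each.
pairs = [
    (["yes", "no", "agree"], ["yes", "no", "neutral"]),  # 2/3 match
    (["no", "no", "agree"], ["no", "no", "agree"]),      # 3/3 match
]
print(round(mean_agreement(pairs), 2))  # 0.83
```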
This technology, while still in its infancy, suggests a future where predictive algorithms could potentially act as online surrogates for individuals. The implications of such advancements are profound, challenging our understanding of human uniqueness and raising questions about the deterministic nature of personality [1].
Source: Scientific American
Parallel to the development of personality-mimicking AI, there has been a surge in AI-generated video content across social media platforms. This phenomenon, dubbed "AI slop," has given rise to a new cottage industry of content creators who use AI tools to produce high volumes of videos with minimal effort [3].
These AI-generated videos range from surreal art pieces to deceptively realistic news reports and influencer posts. The technology behind these creations has advanced rapidly, with tools like Google's Veo 3 now capable of generating lifelike eight-second clips from a single photo [3].
The proliferation of AI-generated content is transforming the media landscape in significant ways:
Content Overload: Social media feeds are becoming inundated with AI-generated videos, many of which are designed to be shocking or attention-grabbing to maximize engagement [3].
Authenticity Concerns: The increasing realism of AI-generated content is blurring the lines between real and fake, making it difficult for viewers to discern authentic content [3].
Economic Disruption: Some AI content creators report substantial earnings, from roughly $5,000 a month to $15,000 in commissions over three months [3]. This new economic model is challenging traditional content creation industries.
Ethical Dilemmas: The use of AI to recreate deceased individuals, as seen in the case of Joaquin Oliver, raises complex ethical questions about consent, grief, and the exploitation of tragedy [2].
Source: The Atlantic
The rapid advancement of AI technology is not without its psychological toll. Many people report feeling a sense of disorientation or "losing their minds" when confronted with increasingly realistic AI-generated content [2]. This sentiment is compounded by the often hyperbolic rhetoric surrounding AI advancements, with some tech leaders proclaiming a "new era for humanity" [2].
The phenomenon has even given rise to new terminology, such as being "AGI-pilled," referring to those who believe that artificial general intelligence is imminent and will dramatically reshape society [2].
As AI technology continues to evolve, several challenges and questions emerge:
Regulation: The need for policies to govern the creation and distribution of AI-generated content, especially concerning misinformation and deepfakes.
Creative Industries: The potential impact on human artists, filmmakers, and other creators who may find themselves competing with AI-generated content.
Media Literacy: The growing importance of educating the public on how to critically evaluate and identify AI-generated content.
Ethical Boundaries: Determining appropriate uses of AI, particularly in sensitive areas such as recreating deceased individuals or generating explicit content without consent [3].
As we navigate this new landscape, it's clear that the integration of AI into our media ecosystem will continue to challenge our perceptions of reality, creativity, and human uniqueness. The coming years will likely see intense debates and policy discussions as society grapples with the implications of these technological advancements.