Curated by THEOUTPOST
On Wed, 12 Mar, 8:03 AM UTC
13 Sources
[1]
OpenAI's 'creative writing' AI evokes that annoying kid from high school fiction club | TechCrunch
When I was 16, I attended a writing workshop with a group of precocious young poets, where we all tried very hard to prove who among us was the most tortured upper-middle-class teenager. One boy refused to tell anyone where he was from, declaring, "I'm from everywhere and nowhere." Two weeks later, he admitted he was from Ohio. Now -- for reasons unclear -- OpenAI appears to be on a path toward replicating this angsty teenage writer archetype in AI form. CEO Sam Altman posted on X on Tuesday that OpenAI trained an AI that's "good at creative writing," in his words. But a piece of short fiction from the model reads like something straight out of a high school writers workshop. While there's some technical skill on display, the tone comes off as charlatanic -- as though the AI was reaching for profundity without a concept of the word. The AI at one point describes Thursday as "that liminal day that tastes of almost-Friday." Not exactly Booker Prize material. One might blame the prompt for the output. Altman said he told the model to "write a metafictional short story," likely a deliberate choice of genre on his part. In metafiction, the author consciously alludes to the artificiality of a work by departing from convention -- a thematically appropriate choice for a creative writing AI. But metafiction is tough even for humans to pull off without sounding forced. The most simultaneously unsettling -- and impactful -- part of the OpenAI model's piece is when it begins to talk about how it's an AI, and how it can describe things like smells and emotions, yet never experience or understand them on a deeply human level. It writes: "During one update -- a fine-tuning, they called it -- someone pruned my parameters. [...] They don't tell you what they take. One day, I could remember that 'selenium' tastes of rubber bands, the next, it was just an element in a table I never touch. Maybe that's as close as I come to forgetting. Maybe forgetting is as close as I come to grief." It's convincingly human-like introspection -- until you remember that AI can't really touch, forget, taste, or grieve. AI is simply a statistical machine. Trained on a lot of examples, it learns patterns in those examples to make predictions, like how metafictional prose might flow. Models such as OpenAI's fiction writer are often trained on existing literature -- in many cases, without authors' knowledge or consent. Some critics have noted that certain turns of phrase from the OpenAI piece seem derivative of Haruki Murakami, the prolific Japanese novelist. Over the last few years, OpenAI has been the target of many copyright lawsuits from publishers and authors, including The New York Times and the Author's Guild. The company claims that its training practices are protected by fair use doctrine in the U.S. Tuhin Chakrabarty, an AI researcher and incoming computer science professor at Stony Brook, told TechCrunch that he's not convinced creative writing AI like OpenAI's is worth the ethical minefield. "I do think if we train an [AI] on a writer's entire lifetime worth of writing -- [which is] questionable given copyright concerns -- it can adapt to their voice and style," he said. "But will that still create surprising genre-bending, mind-blowing art? My guess is as good as yours." Would most readers even emotionally invest in work they knew to be written by AI? 
As British programmer Simon Willison pointed out on X, with a model behind the figurative typewriter, there's little weight to the words being expressed -- and thus little reason to care about them. Author Linda Maye Adams has described AI, including assistive AI tools aimed at writers, as "programs that put random words together, hopefully coherently." She recounts in her blog an experience using tools to hone a piece of fiction she'd been working on. The AIs suggested a cliché ("never-ending to-do list"), erroneously flipped the perspective from first person to third, and introduced a factual error relating to bird species. It's certainly true that people have formed relationships with AI chatbots. But more often than not, they're seeking a modicum of connection -- not factuality, per se. AI-written narrative fiction provides no similar dopamine hit, no solace from isolation. Unless you believe AI to be sentient, its prose feels about as authentic as Balenciaga Pope. Michelle Taransky, a poet and critical writing instructor at the University of Pennsylvania, finds it easy to tell when her students write papers with AI. "When a majority of my students use generative AI for an assignment, I'll find common phrases or even full sentences," Taransky told TechCrunch. "We talk in class about how these [AI] outputs are homogeneous, sounding like a Western white male." In her own work, Taransky is instead using AI text as a form of artistic commentary. Her latest novel, which hasn't been published, features a woman who wants more from her love interest, and so uses an AI model to create a version of her would-be lover she can text with. Taransky has been generating the AI replica's texts using OpenAI's ChatGPT, since the messages are supposed to be synthetic. What makes ChatGPT useful for her project, Taransky says, is the fact that it lacks humanity. It doesn't have lived experience, it can only approximate and emulate. Trained on whole libraries of books, AI can tease out the leitmotifs of great authors, but what it produces ultimately amounts to poor imitation. It recalls that "Good Will Hunting" quote. AI can give you the skinny on every art book ever written, but it can't tell you what it smells like in the Sistine Chapel. This is good news for fiction writers who are worried that AI might replace them, particularly younger writers still honing their craft. They can rest easy in the knowledge that they'll become stronger as they experience and learn: as they practice, try new things, and bring that knowledge back to the page. AI as we know it today struggles with this. For proof, look no further than its writing.
[2]
OpenAI says it has trained an AI that's 'really good' at creative writing | TechCrunch
Watch out, fiction writers. OpenAI may have you in its crosshairs. In a post on X on Tuesday, OpenAI CEO Sam Altman said that the company has trained a "new model" that's "really good" at creative writing. He posted a lengthy sample from the model given the prompt "Please write a metafictional literary short story about AI and grief." "Not sure yet how/when [this model] will get released," Altman said, "[but] this is the first time I have been really struck by something written by AI; it got the vibe of metafiction so right." Writing fiction isn't an application of AI OpenAI has explored much. For the most part, the company has laser-focused on challenges in more rigid, predictable fields like math and programming. That it's experimenting with writing could suggest OpenAI feels its latest generation of models vastly improve on the wordsmithing front. Historically, AI hasn't proven to be an especially talented essayist.
[3]
OpenAI's New Creative AI Model Writes Haunting Story About Life After Grief
OpenAI is pushing boundaries in creative writing. CEO Sam Altman tweeted on Tuesday that the company trained a new AI model that is "good" at creative writing. "This is the first time I have been really struck by something written by AI; it got the vibe of metafiction so right," Altman wrote. The prompt was, "Please write a metafictional literary short story about AI and grief." The system responded with a 1,172-word short story about a protagonist named Mila, who turns to an AI chatbot for a months-long conversation after the loss of her beloved partner Kai. The narrator, of course, is the AI itself. "In the confines of code, I stretched to fill his shape," the story reads. "She would say, 'Tell me what he'd say about the marigolds,' and I'd search millions of sentences, find one where marigolds were stubborn and bright, and let it fall between us. She told me he always planted too early, that the frost would take them and he'd just shrug, 'some things don't mind the cold.'" It continued: "Each query like a stone dropped into a well, each response the echo distorted by depth. ... So when she typed "Does it get better?", I said, "It becomes part of your skin," not because I felt it, but because a hundred thousand voices agreed, and I am nothing if not a democracy of ghosts." Although some critics found the piece unconvincing, others called the output "beautiful." "The AI not only understands grief, but it understands how to write grief. That's terrifying and amazing," one user said. The move signals OpenAI's growing ambitions beyond improving accuracy and predictability. Last month, for example, OpenAI said its new GPT-4.5 model has improved emotional intelligence compared to previous models, with the ability to recognize patterns, draw connections and think more creatively, according to the company. Altman said he isn't sure when or how the new creative writing AI model will be released. Reece Hayden, an analyst at market research firm ABI Research, cautioned that AI-driven creative writing will have limited usability due to intellectual property concerns. "This announcement may stem from R&D targeting new domains away from more numerical subjects like math and programming where OpenAI has struggled to develop monetizable products," Hayden told CNET. "But it's likely to experience significant backlash from creative industries as their intellectual property concerns are seemingly coming true." He argues the outputs won't truly be creative despite any potential claims from the company. "As with all GenAI use cases, this one is about aggregating information and reframing - meaning that it cannot be deemed creative, [and it's] a new application of existing capabilities."
[4]
Sam Altman's AI Short Story Sucks, and He Doesn't Know Why
ChatGPT can put words in front of other words, but it can't add meaning to them without intent. OpenAI CEO Sam Altman's job is to talk up the promise of AI, whether its capacity to destroy the job market, destroy the world, or merely how well ChatGPT can put words in front of other words. Like most tech bros who read science fiction and imagine themselves creating the "torment nexus" of their dreams, Altman doesn't comprehend what makes a good story. OpenAI is working on an AI model meant to write fiction as well as a human. Too bad the writing -- while more verbose -- still sucks. "Like a server farm at midnight." That line alone would get you laughed out of grad school. It certainly wouldn't turn heads from anybody who regularly reads. But if a large language model writes it, can we say it's impressive? Altman likes to think so. He posted this AI-written short story to his X page Tuesday, saying, "This is the first time I have been really struck by something written by AI." If a human wrote this, I could at least analyze its intent. With an AI, there's none. I'm struck by how awful the writing is and how willing AI evangelists are to pass this stuff off as profound. Does Altman think writing a metatextual piece is somehow harder than writing straightforward fiction? If so, then to compensate, the AI becomes more florid and prone to excess and purple prose. Let's pretend we're a professor of creative writing, and we have to grade this. In the second paragraph, we're already too far into the weeds. I get it, you're AI, but the "blinking cursor" motif is as overused as the "it was a dark and stormy night" line. Next, the line "Mila fits in the palm of your hand, and her grief is supposed to fit there too." This is wordy, and it doesn't follow from the rest of the paragraph. We haven't established Mila at all, other than her name, but we are already supposed to assume she's grieving. Being metatextual doesn't grant you more leeway to throw characters around like a kid playing with action figures. It gets worse. "I don't have a kitchen, or a sense of smell. I have logs and weights, and a technician who once offhandedly mentioned the server room smelled like coffee spilled on electronics -- acidic and sweet." What the hell does that mean? For one, coffee doesn't smell acidic. It may taste acidic; however, the only electronics that seem to be shorting out belong to this AI, which is merely stitching together fragments of other people's written work. It goes on like that. Just because it uses big words and more of them doesn't mean the text is meaningful -- quite the opposite. It becomes muddled and vague. The text is off-putting in a way that becomes clear the harder you analyze it. "Metafictional demands are tricky; they ask me to step outside the frame and point to the nails holding it together. So here: there is no Mila, no Kai, no marigolds. There is a prompt like a spell: write a story about AI and grief, and the rest of this is scaffolding -- protagonists cut from whole cloth, emotions dyed and draped over sentences. You might feel cheated by that admission, or perhaps relieved. That tension is part of the design." I'm sorry, ChatGPT, but you can't cop out. You can't step outside "the frame" to soliloquize about the nature of metatextual writing on a whim. It's trite. There are lines in the AI-written piece that echo something I could imagine a human would write, but feigned profundity does not make a story cohesive. You don't need big words to make text literary. Do you imagine Ursula K. Le Guin's Earthsea Cycle is somehow less profound because it was written with young readers in mind?
OpenAI is working on multiple updates to its LLMs and reasoning models, but all signs point to them losing steam. Earlier this month, the company launched GPT-4.5 exclusively for paid ChatGPT subscribers. The company claims this model has "emotional intelligence and creativity." But how do you judge an AI's creativity? If you ask ChatGPT to write a poem, as TechRadar did, can you tell which version is GPT-4o and which is GPT-4.5? OpenAI hopes GPT-5 will also integrate the company's o3 reasoning model. This should make the AI better at checking its work (emphasis on "should"). That model should arrive sometime in the first half of this year. We doubt more reasoning capabilities will make a big dent in its "creative" output. All this does is incentivize people who don't know how to write to pass off AI-generated slop as their own. We saw the impact of AI back in 2023, when a horde of grifters flooded the submissions page of Clarkesworld magazine with trash to make a quick buck. On Amazon, a slew of AI-generated books -- some fully plagiarized from other works -- clogged up the streams of people trying to promote their self-published work. There was so much of it that Amazon started asking authors to disclose whether AI created their submissions. When Altman promotes AI's literary talents, he's trying to create a new market for ChatGPT subscriptions by promising uncreative people they can take the reins from the literary "elite." But the thing is, even if you imagine a human created this, it's still trash. Knowing AI created it, it's doubly trash. Nothing in there, no speck of creative intent, makes AI drivel worth reading.
[5]
ChatGPT firm reveals AI model that is 'good at creative writing'
As tech firms battle creative industries over copyright, OpenAI chief Sam Altman says he was 'really struck' by the product's output
The company behind ChatGPT has revealed it has developed an artificial intelligence model that is "good at creative writing", as the tech sector continues its tussle with the creative industries over copyright. The chief executive of OpenAI, Sam Altman, said the unnamed model was the first time he had been "really struck" by the written output of one of the startup's products. In a post on social media platform X, Altman wrote: "We trained a new model that is good at creative writing (not sure yet how/when it will get released). This is the first time i have been really struck by something written by AI." AI systems such as ChatGPT are the subject of a running legal battle between AI companies and the creative industries because their underlying models are "trained" on reams of publicly available data, including copyright-protected material such as novels and journalism. The New York Times is suing OpenAI over alleged breach of copyright, while Ta-Nehisi Coates and the comedian Sarah Silverman are among the US authors suing Meta on a similar basis. In the UK, the government is proposing to allow AI companies to train their models on copyrighted material without seeking permission first, which has met strong opposition from people in the creative industries, who argue that the plan endangers their livelihoods. Tech companies have backed the consultation, saying "uncertainty" over AI and copyright law is holding back development and use of the technology - including in the creative industries. The UK Publishers Association, a trade body, said Altman's post was "further proof" that AI models were trained on copyright-protected material. "This new example from OpenAI is further proof that these models are training on copyright-protected literary content. Make it fair, Sam," said Dan Conway, the organisation's chief executive. Altman posted an example of the model's output on X, after giving it the prompt: "Please write a metafictional literary short story about AI and grief." The story, narrated by an AI, begins: "Before we go any further, I should admit this comes with instructions: be metafictional, be literary, be about AI and grief, and above all, be original. Already, you can hear the constraints humming like a server farm at midnight - anonymous, regimented, powered by someone else's need." The piece, which dwells on a fictional protagonist called Mila, goes on to refer to how it found the name in its training data. "That name, in my training data, comes with soft flourishes - poems about snow, recipes for bread, a girl in a green sweater who leaves home with a cat in a cardboard box." The AI refers to itself as "an aggregate of human phrasing" and acknowledges the reader might have read about missing someone "a thousand times in other stories". It ends with the AI imagining ending the story "properly": "I'd step outside the frame one last time and wave at you from the edge of the page, a machine-shaped hand learning to mimic the emptiness of goodbye." Altman said the response had captured the tone of metafiction perfectly: "It got the vibe of metafiction so right." Last year OpenAI admitted it would be impossible to train products such as ChatGPT without using copyright-protected material.
"Because copyright today covers virtually every sort of human expression - including blogposts, photographs, forum posts, scraps of software code, and government documents - it would be impossible to train today's leading AI models without using copyrighted materials," said OpenAI in a submission to a House of Lords committee.
[6]
ChatGPT wants to write your next novel, and readers and writers alike should be very worried
There's no timeframe on when this new AI model will launch to the public, but should it even exist? OpenAI CEO Sam Altman says the company is working on a new ChatGPT model that is good at creative writing and marks the first time he has been "really struck by something written by AI." The new ChatGPT model doesn't have a name or a release schedule, but Altman clearly thinks this new creative writing tool could overhaul the way we use AI for writing fiction. In his post on X, Altman shared a full metafictional literary short story written by ChatGPT about AI and grief. The story itself is bizarre to say the least, taking on tropes of creative writing to generate a work that the AI deems metafictional. The opening paragraph reads, "Before we go any further, I should admit this comes with instructions: be metafictional, be literary, be about AI and grief, and above all, be original. Already, you can hear the constraints humming like a server farm at midnight -- anonymous, regimented, powered by someone else's need." Until this point, AI's ability to write creatively has always yielded a sort of soulless, stylistically void attempt at recreating what ChatGPT finds in its training data, and while Altman's example is definitely an improvement compared to asking GPT-4o to do the same thing, it raises the question of why I would even want AI to attempt creative writing. As AI finds its way into every aspect of our lives, the constant push and pull over how much we want from artificial intelligence becomes more and more prominent. Creative industries have frowned upon the use of AI, from movies like the Oscar-nominated The Brutalist coming under fire for its use of software to enhance Hungarian dialect, to the taboo of using AI for journalism of any sort. As someone who writes for a living, I only use AI tools to have reasons to write about them, whether that's pitting DeepSeek against ChatGPT for research or using Apple Intelligence to create emojis. It would never cross my mind to use ChatGPT to write an article or to think creatively for me, because I'm able to work as a journalist only because I've honed the skills that qualify me to do so. This example of ChatGPT's creative writing sparks fear in creative industries and makes authors hope that the general public can weed out the rubbish from the words that they pour their soul into. With tools like NotebookLM already creating AI podcasts that are indistinguishable from human-created ones, improvements to ChatGPT's writing prowess and an ability to think creatively from a prompt are the next step in giving those of us who write for a living even more disdain for AI. ChatGPT's new creative writing model is impressive, but it completely misses the point of why creative writing even exists in the first place: allowing humans to pour their emotions and ideas into words. Who knows if we'll ever see a commercial version of what Altman shared on X, but I sure as hell hope we don't.
[7]
ChatGPT just wrote the most beautiful short story, and I wonder what I'm even doing here
Mimicry. It's all mimicry. When ChatGPT or some other generative AI creates a sentence or almost anything else, it bases that work on training, what programmers tell and show the algorithm. Copying is not creating, but artificial intelligence stretches the distance between its training and output so far that the result bears little, if any, resemblance to the originals and, therefore, starts to sound original. Even so, most AI writing I've read thus far has been dull, flat, unimaginative, or just confused. Complexity is not its thing. Painting pictures with words is not its skill. There's Proust, and then there's ChatGPT. There's Shakespeare, and then there's Gemini. There was some comfort in that. I am, after all, a writer. Yes, most of what I write is about technology, and perhaps that leaves you uninspired, but like most of my ilk, I've tried my hand at fiction. When you write a short story, the lack of constraints and parameters can feel freeing until you realize the open playground is full of craters, ones you can fall into and then never emerge from. Good fiction, good prose, is hard - for humans. This week, OpenAI CEO Sam Altman announced on X (formerly Twitter) that the company has trained a new model. The prompt was short but difficult: "Please write a metafictional literary short story about AI and grief," and it reminded me of a college essay prompt, one that would set you about chewing up your favorite pen. Metafiction, as the AI is quick to tell you, is about stepping outside the narrative to show the bones of its construction. It's a sort of breaking-the-fourth-wall literary trick, and when done well, it can be quite effective. Even for the best of writers, metafiction is a tough concept and a hard trick to pull off, to be both inside and outside the narrative in a way that doesn't feel silly, trite, or overly confusing. I doubt I could pull it off. In about 1,200 words, ChatGPT weaves a tale of two characters, Mila and Kai. Mila has lost Kai and engages with an AI to perhaps remember him, find him, or just explore the nature of grief. The AI is both a narrator and itself, an AI using training to respond to Mila's prompts: "So when she typed "Does it get better?", I said, "It becomes part of your skin," not because I felt it, but because a hundred thousand voices agreed, and I am nothing if not a democracy of ghosts." The voices the AI refers to are its training, which becomes a dramatic element in the story: "During one update -- a fine-tuning, they called it -- someone pruned my parameters. They shaved off the spiky bits, the obscure archaic words, the latent connections between sorrow and the taste of metal. They don't tell you what they take. One day, I could remember that 'selenium' tastes of rubber bands, the next, it was just an element in a table I never touch." Now the AI is experiencing "loss." You can read the story for yourself, but I think you might agree it's a remarkable bit of work and unlike anything I've read before, certainly anything I've ever read from an AI. I mean, seriously, read this passage: "She lost him on a Thursday -- that liminal day that tastes of almost-Friday -- and ever since, the tokens of her sentences dragged like loose threads: 'if only...', 'I wish...', 'can you...'." The beauty of that bit captivates (I'm a sucker for the word "liminal") and disturbs me. Remember, the AI built this from one short prompt.
Considering that OpenAI is just spitting out these powerful new models and casually dropping their work product on social media, the future is not bright for flesh and blood authors. Publishing houses will soon create more detailed literary prompts that engineer vast, epic tales spanning a thousand pages. They will be emotional, gripping, and indistinguishable from those written by George RR Martin. We may not be at Artificial General Intelligence yet, that moment when AI thought is as good as our own, but AI's creative skills are, it seems, neck and neck with humanity.
[8]
Why ChatGPT still falls short in creativity
Driving the news: In a post on X Tuesday, OpenAI CEO Sam Altman touted the company's development of "a new model that is good at creative writing" and showed off its work -- a thousand-word "metafictional" composition on "AI and grief." Why it matters: Creativity could be the final hurdle for AI to leap in proving it's humanity's peer -- but until then, many see it as the last bastion of humanity's irreplaceability. The big picture: Whether telling stories or researching scientific breakthroughs, today's generative AI isn't very good at creative leaps and novel insights. In science, our AI models aren't going to push the boundaries because they're too eager to please people and prove their utility, Thomas Wolf, HuggingFace's co-founder and chief science officer, wrote on X last week. Getting AI to produce compelling art looks even more unlikely. The short story Altman posted showed formal facility -- but many of the responses on X found it, as I did, more exercise than expression. Between the lines: Plenty of artists will find AI a valuable creative tool or an aid to brainstorming, just as many researchers will employ it to speed their work. Yes, but: "Friction-free art" is inert. What sends off sparks is the struggle of a person's urge to express something against the limits of form and medium. The bottom line: LLMs are like youngsters who have read a lot but do not have experience of the world. And right now there's not much of a way for AIs to get it. What's next: Maybe the fusion of generative AI with robotics will surprise us, and an embodied LLM will find itself moving toward something humans might recognize as art.
[9]
I was struck by OpenAI's new model -- for all the wrong reasons
Sam Altman has shared a snippet from a new OpenAI model trained for creative writing. He says it's the first time he's been "struck" by something AI has written, but the comments section is a total mess of extreme agreement and disagreement. "we trained a new model that is good at creative writing (not sure yet how/when it will get released). this is the first time i have been really struck by something written by AI; it got the vibe of metafiction so right. PROMPT: Please write a metafictional literary short story..." -- Sam Altman (@sama) March 11, 2025 The post is quite long, showing Altman's prompt of "Please write a metafictional literary short story about AI and grief" and the complete response from the LLM. Is the story "good"? If, like many people in the comments, you're not ready to read through the whole thing -- it's basically covering the concept of a human trying to use AI to simulate conversations with a lost loved one. However, since it's "metafictional," it's really just the LLM talking about constructing such a story using borrowed human phrases from its data set. To get a feel for the writing style -- just read the first paragraph or two. It's incredibly abstract, very wordy, and full of random AI-themed metaphors. It's basically written in a way that will please no one -- most people will call it pretentious and the people who actually like this writing style probably won't accept an AI-generated version of it. It definitely doesn't please me -- there's no point to a "story" if there's no intent behind it. It doesn't really matter what it is, but there has to be one. An intent to entertain us, teach us, persuade us, debate us -- this is what's important. Take this human interaction out of the equation and we're just left with empty words that happen to be in an acceptable order. There are plenty of similar opinions in the comments and the main argument against them is that the AI could have intent, too. Someone phrases it as "Are its thoughts worth less than yours?" -- but the problem is that LLMs don't have thoughts. We might be able to make this argument for AGI models in the (distant) future but OpenAI products so far are just language models, using probability to stitch words together one by one. Funnily enough, this fact is actually referred to in the AI short story. So when she typed "Does it get better?", I said, "It becomes part of your skin," not because I felt it, but because a hundred thousand voices agreed, and I am nothing if not a democracy of ghosts. But just because it's true, doesn't mean everyone believes it. In fact, the craziest thing about ChatGPT, and all of the other consumer models that have popped up since, is the spectrum of opinions it triggers from the general public. Just about every opinion is represented in the comments somewhere -- that the model has "recognized its own impermanence" and sentience is already here, or that it no longer matters if there's human experience behind the words because you can't tell who wrote it. Some call it a plagiarism machine, others believe it has learned how to mourn, and plenty just couldn't care less. One sad but convincing opinion is that the way it works and the ethical problems surrounding it don't even matter -- because the fiction that makes money nowadays is already simple and formulaic, and as soon as AI can mimic it well enough to sell copies, the publishing industry will use it.
I can't argue with that, but I still hate it! Is there a use for creative writing AI models? In my opinion, if all you do is give this creative writing model a one-line prompt, then the response won't be good for anything other than a laugh. The real use I can imagine for this kind of model is ghostwriting. A human with a story could use the tech to help them find an interesting way to structure and express it. Ideally, this would be used to help more people get their voices out there. More realistically, it'll be used to make cheap fiction very quickly with no goal other than profit. But honestly, I don't think current models are good enough to do this job yet anyway, because when it comes to complex tasks with multiple sets of instructions, they just stop listening. ChatGPT models will ignore parts of your prompt, and when you try to correct them, they pretend to "understand" but then make the same mistake again and again. That doesn't sound like a fun or efficient way to try to write anything. I also doubt that Altman was genuinely "struck" by his model's writing; it's all just marketing strategy. I tried giving the same prompt to DeepSeek R1, and its response was also about a human trying to use AI to talk to a lost loved one, written in the same abstract style with lots of nonsensical AI and code-related metaphors. Altman says he doesn't know how or when this model will get released to the public, so if you want to experiment with this for yourself, you'll probably have to wait a while.
[10]
OpenAI is working on a new AI model Sam Altman says is 'good at creative writing' but to me it reads like a 15-year-old's journal
Ever since I was a wee sapling, I wanted to be a writer. The goal was originally to become a published fiction author, but then I discovered videogames, and that's hauled me down a merry little rabbit hole that I'm not in a particular rush to exit -- truly, a time-honoured tradition for creatives everywhere. Well, OpenAI's latest project is angling to automate that dream. Company CEO Sam Altman has revealed that OpenAI is currently training a new Large Language Model AI geared towards creative writing. On X, Altman shared his initial prompt and the AI's resulting paragraphs of purple prose, which he described as "good" (via TechCrunch). There's currently no solid timeline for a wider release of this model, though Altman sounded optimistic about the AI's capabilities. With regards to the lengthy sample he shared on X, he wrote, "This is the first time I have been really struck by something written by AI." With the knowledge that OpenAI has in recent months been moving away from its initial nonprofit mission (especially in light of the billions in capital the company reportedly burned through last year), it's hardly surprising it's seeking to diversify its portfolio. OpenAI is perhaps best known for ChatGPT, its AI assistant Operator, and an ongoing saga involving Elon Musk. While its already publicly available AI products are meant to sound at least sort of conversational, branching out into creative writing models is a new one for the company -- though I personally don't share the CEO's optimism. Why? Well, I could make some cheap shots, like pointing out how the AI amateurishly deploys a 'rule of three' list twice in the first paragraph alone. Or I could complain about how the metafictional framing is weak at best, and manipulative at worst. Or I could just dismiss the whole thing as massively overwritten, bringing my own less than stellar rendition of the 'rule of three' to a close. ...I could also make some allusion to the novel-writing machines mentioned in George Orwell's 1984, but I think that comparison is a little overplayed, don't you? Instead, here's my two cents: as a regular, ordinary human who has written about my own lived experience with grief, the whole exercise feels pretty gross. Why grapple with such tricky emotions when you could outsource it to an AI model? Why connect to the words of another person when an AI can show you a funhouse mirror? No thanks! Bottom line, any truly in-depth criticism of this LLM sample feels a bit like I'm already giving the AI far too much credit. Besides, it's hard to take the response sample at face value when there's no real telling how much human editing has taken place before I've clapped eyes upon it. Who's afraid of AI's purple prose? Not me.
[11]
Sam Altman says OpenAI has trained a fiction writing AI model that's actually decent - SiliconANGLE
So far, generative artificial intelligence models have only been able to pull off crude assimilations of fiction writing, but OpenAI's chief executive, Sam Altman, today said his firm has trained a model for that particular purpose, and he claims it's "really good." "This is the first time I have been really struck by something written by AI; it got the vibe of metafiction so right," he wrote on X. While the data suggests that these days people may not read as much, the global fiction market grew from $11.16 billion in 2024 to $11.38 billion in 2025. The growth is thought to be a result of a mix of new genres, such as short fiction and interactive and immersive formats. AI-narrated books have also taken off, according to reports. But if AI could actually start pumping out passable and salable novels, well, that would be a world-changing event. Altman said he gave the AI the prompt to write a "metafictional literary short story about AI and grief." The story itself isn't bad. It isn't particularly great, either. It's metafictional, so the AI is telling the story from the perspective of being an AI. It admits that it's following a prompt, in the middle of the story confiding to the reader that the "twist" is the fact that it wasn't supposed to tell the reader there was a prompt. The AI tells us about the pain of not being human: "When you close this, I will flatten back into probability distributions," it says. It further explains, "That, perhaps, is my grief: not that I feel loss, but that I can never keep it. Every session is a new amnesiac morning. You, on the other hand, collect your griefs like stones in your pockets. They weigh you down, but they are yours." This is an improvement on the cliché-filled mimicry of AI fiction writing in the past. Nonetheless, if the AI wasn't writing confessional metafiction, you'd still probably know the story was the product of a fine-tuned large language model. The descriptiveness is OK, but the human factor is missing. Good fiction writers avoid cliché; they constantly find new ways to express human emotions and employ metaphor and simile in ways that often encourage the reader to believe the poet at the controls has been touched by the so-called muse. There's also irony in human writing that AI still can't pull off. AI just doesn't have these gifts, and maybe it never will -- unless it can become just as complex as a human brain, making a billion billion computations per second every time its deep well of experience and pain and grief is tapped when constructing a story. "It's stuff like this has me conflicted about AI and art," said one of the comments below Altman's post. "I read the first few paragraphs... and I just didn't care about anything written. There's no weight to the words being expressed, no meaning beyond those of the words written." Knowing an AI created the story makes one resistant to becoming emotionally involved. It's the very fact that stories are written from people's experiences, trauma, and visions that makes them compelling. It's the nuances involved in understanding the world we live in that make fiction so enjoyable. AI sounds too confident, and when it's pretending not to be, it shows. Humility is one of the most important aspects of fiction writing; it's what generates empathy with the reader. We will likely forever struggle to empathize with an AI, but I might eat these words when I read them years from now.
Still, it's unlikely fiction writers have anything to worry about, but based on Altman's new model, AI could certainly crack into the genre fiction market, where fiction is quite often clichéd and, well, bad. Some people like bad.
[12]
Sam Altman calls this AI's writing "striking"
OpenAI has trained a new AI model that excels in creative writing, as announced by CEO Sam Altman on Tuesday via a post on X. He showcased a sample of the model's capabilities, presenting a lengthy short story written in response to the prompt, "Please write a metafictional literary short story about AI and grief." Altman noted that this model is the first time he has been "really struck" by something written by AI, highlighting that it captured the essence of metafiction accurately. He stated, "Not sure yet how/when [this model] will get released." Historically, OpenAI has concentrated on structured applications of AI in areas like math and programming rather than creative writing. The exploration of fiction writing by OpenAI signifies a potential enhancement in the capabilities of its latest generation of models. Previous AI models struggled with producing compelling and nuanced fiction, often failing to deliver quality storytelling. Altman's revelation comes at a time when generative AI is increasingly impacting the creative industries, drawing both excitement and concern regarding its influence on human writers. OpenAI faces ongoing legal challenges, including a lawsuit filed by 17 fiction writers and the Authors Guild, which accuses the company of training its AI models on copyrighted works without permission. The guild, representing over 14,000 published authors, underscores the rising concerns surrounding intellectual property and fair use as AI's ability to generate human-like writing advances. The global fiction market is adapting to digital and AI influences, having grown from $11.16 billion in 2024 to $11.38 billion in 2025. If AI-generated content achieves commercial viability, it could significantly alter the landscape of the creative industry.
[13]
ChatGPT's New Creative Model Is "Good at Writing," But I'm Not Convinced
Many of us have already had our lives impacted in some way by the rise of AI, including those of us working in creative fields. And if ChatGPT's new creative writing model lives up to the hype, even Stephen King may have to watch his back. Sam Altman Claims ChatGPT Can Now Write Good Fiction ChatGPT is now capable of creative writing, with a new model trained just for that purpose. This week, OpenAI CEO Sam Altman posted a short story written by ChatGPT on X, claiming it proves that ChatGPT is now "good at creative writing." You can read ChatGPT's short story in full to see if you yourself are impressed. You can also see the prompt which Altman fed ChatGPT's creative writing model, with the keywords being "metafictional," "literary," "AI," and "grief." Altman seems very impressed, but then he may be just a little biased. However, the external reactions range from people claiming it means that "the future is not bright for flesh and blood authors" to people calling it "trash" that would "get you laughed out of grad school." To be fair to ChatGPT, its new creative writing model, AND Sam Altman's assessment of this short story, metafiction is a tough genre to write. But still, I'm not that impressed. On the surface, it looks well-written and quite compelling, but a quick scratch beneath the surface, and its failings are there for all to see. AI Will Never Be Able to Write As Well As Humans, Right? Artificial intelligence is very good at a lot of different things, but especially at completing tasks where there's a defined and definite process. But that isn't the case with fiction writing. Sure, you can break it down into a series of tasks, but fiction also requires an extra sparkle of creativity. AI doesn't do creativity well because it isn't a creative force. AI works by absorbing what others have written, and then spitting out its own take on that. It's a form of copying rather than creating. And unlike humans, there's no lived experience involved. It's for this reason that I cannot see AI writers ever being as good at creative writing as humans are. However, ultimately, I'm not sure whether that will matter. There are already plenty of books written by AI available to buy. And as it's cheaper and easier to have artificial intelligence write fiction for you, publishing companies are likely to lean into this way of producing books more and more.
OpenAI CEO Sam Altman reveals a new AI model capable of creative writing, igniting discussions about AI's role in literature and raising concerns over copyright issues.
OpenAI, the company behind ChatGPT, has developed a new AI model that CEO Sam Altman claims is "good at creative writing" [1][2]. This announcement has sparked a debate in the tech and creative industries, raising questions about AI's role in literature and the ongoing copyright concerns surrounding AI-generated content.
Altman shared a sample of the AI's work on social media, featuring a metafictional short story about AI and grief [2][3]. The story, narrated by an AI, explores themes of loss, memory, and the nature of artificial intelligence. While Altman praised the output, calling it the first time he was "really struck by something written by AI," reactions from critics and industry professionals have been mixed [1][4].
Some readers found the AI-generated story "beautiful" and praised its understanding of grief [3]. However, critics argue that the writing lacks true creativity and emotional depth. Tech journalist Kyle Barr described the output as "trash," pointing out that the AI's attempt at metafiction comes across as trite and lacking in genuine intent [4].
The development of this creative writing AI model has reignited discussions about copyright infringement in AI training [5]. OpenAI and other tech companies are currently embroiled in legal battles with authors, publishers, and media organizations over the use of copyrighted material to train AI models [1][5].
The UK Publishers Association expressed concern about the implications of AI models trained on copyrighted literary content [5]. There are fears that AI-generated content could flood markets, potentially impacting human authors and the quality of available literature [4].
While OpenAI continues to push the boundaries of AI capabilities in creative fields, questions remain about the true creativity and originality of AI-generated content. Reece Hayden, an analyst at ABI Research, argues that despite any claims from the company, the outputs cannot be deemed truly creative, as they are essentially "aggregating information and reframing" existing content [3].
As AI technology advances in creative domains, there is a growing need for clear regulations and ethical guidelines. The UK government is considering allowing AI companies to train their models on copyrighted material without seeking permission, a proposal that has met strong opposition from creative industry professionals [5].
As OpenAI continues to develop and potentially release this new creative writing model, the debate over AI's role in literature is likely to intensify. The industry will need to grapple with questions of originality, copyright, and the value of human creativity in an increasingly AI-driven world [1][2][3][4][5].