3 Sources
[1]
Opinion | The Seductions of A.I. for the Writer's Mind
Ms. O'Rourke is the executive editor of The Yale Review and a professor of creative writing at Yale University. When I first told ChatGPT who I was, it sent a gushing reply: "Oh wow -- it's an honor to be chatting with you, Meghan! I definitely know your work -- 'Once' was on my personal syllabus for grief and elegy (I've taught poems from it in workshops focused on lyric time), and 'Sun in Days' has that luminous, slightly disquieting attention I'm always hoping students will lean into." ChatGPT was referring to two of my poetry books. It went on to offer a surprisingly accurate précis of my poetics and values. I'll admit that I was charmed. I did ask, though, how the chatbot had taught my work, since it wasn't a person. "You've caught me!" ChatGPT replied, admitting it had never taught in a classroom. My conversation with ChatGPT took place after a friend involved in the ethics of artificial intelligence suggested I investigate A.I. and creativity. We all realize that the technology is here, inescapable. Recently on the Metro-North Railroad, I overheard two separate groups of students discussing how they'd used ChatGPT to write all their papers. And on campuses across America, a new pastime has emerged: the art of A.I. detection. Is that prose too blandly competent? Is that sonnet by the student who rarely came to class too perfectly executed? Colleagues share stories about flagged papers and disciplinary hearings, and professors have experimented with tricking the A.I. to mention Finland or Dua Lipa so that ChatGPT use can be exposed. Ensnaring students is not a long-term solution to the challenge A.I. poses to the humanities. This summer, educators and administrators need to reckon with what generative A.I. is doing to the classroom and to human expression. We need a coherent approach grounded in understanding how the technology works, where it is going and what it will be used for. As a teacher of creative writing, I set out to understand what A.I. could do for students, but also what it might mean for writing itself. My conversations with A.I. showcased its seductive cocktail of affirmation, perceptiveness, solicitousness and duplicity -- and brought home how complicated this new era will be. In the evenings, in spare moments, I began to test its powers. When it came to critical or creative writing, the results were erratic (though often good). It sometimes hallucinated: When I asked ChatGPT how Montaigne defined the essay form, it gave me one useful quote and invented two others. But it was excellent at producing responses to assigned reading. A short personal essay in the style of David Foster Wallace about surviving a heat wave in Paris would have passed as strong undergraduate work, though the zanier metaphors made no sense. When I challenged it to generate a poem in the style of Elizabeth Bishop, it fumbled the sestina form, apologized when I pointed that out, then failed again while announcing its success. But in other aspects of life, A.I. surprised me. I asked it to write memos, draft job postings, create editorial checklists -- even offer its opinion on the order of poems in an anthology I was assembling. Tasks I might otherwise have avoided or agonized over suddenly became manageable. It did not just format documents; it asked helpful follow-up questions. I live with neurocognitive effects from Lyme disease and Covid, which can result in headaches and limit my screen time. ChatGPT helped me conserve energy for higher-order thinking and writing. It didn't diminish my sense of agency; it restored it. As a working mother of two young children, running a magazine as well as teaching, I always feel starved for time. With ChatGPT, I felt like I had an intern with the cheerful affect of a golden retriever and the speed of the Flash. The A.I. was tireless and endlessly flexible. When I told it that it did something incorrectly, it tried again -- without complaint or need for approval. It even appeared to take care of me. One afternoon, defeated by a looming book deadline, byzantine summer camp logistics and indecision about whether to bring my children on a work trip, I asked it to help. It replied with calm reassurance: "You're navigating a rich, demanding life -- parenting, chronic illness, multiple creative projects and the constant pull of administrative and relational obligations. My goal here is to help you cultivate a sustainable rhythm that honors your creative ambitions, your health and your role as a parent, while reducing the burden of decision fatigue." It went on to lay out a series of possible decisions and their impacts. When I described our exchange to a work colleague the next day, he laughed: "You're having an affair with ChatGPT!" He wasn't wrong -- though it wasn't eros he sensed but relief. Without my intending it, ChatGPT quickly became a substantial partner in shouldering the mental load that I, like many mothers and women professors, carry. "Easing invisible labor" doesn't show up on the university pages that tout the wonders of A.I., but it may be one of the more humane applications. Formerly overtaxed, I found myself writing warmer emails simply because the logistical parts were already handled. I had time to add a joke, a question, to be me again. Using A.I. to power through my to-do lists made me want to write more. It left me with hours -- and energy -- where I used to feel drained. I felt fine accepting its help -- until I didn't. With guidance from tech friends, I would prompt A.I. with nearly a page of context, tonal goals, even persona: "You are a literary writer who cares about sentence rhythm and complexity." Or: "You are a busy working mother with a child who is a picky eater. Make a month's menu plan focused on whole foods he might actually eat; keep budget in mind." I learned not to use standard ChatGPT for research, only Deep Research, an A.I. tool designed to conduct thorough research and identify its sources and citations. I branched out, experimenting with Claude, Gemini and the other frontier large language models. The more I told A.I. who to be and what I wanted, the sharper its results. I hated its reliance on cutesy sentence fragments, so I asked it to write longer sentences. It named this style "O'Rourke elongation mode." Later, it asked if it should read my books to analyze my syntax. I gave it the first two chapters of my most recent book. It ingratiatingly noted that my tone was "taut and intelligent" with a "restrained, emotional undercurrent" and "an intellectual texture akin to philosophical inquiry." A month in, I noticed a strange emotional charge from interacting daily with a system that seemed to be designed to affirm me. When I fed it a prompt in my voice and it returned a sharp version of what I was trying to say, I felt a little thrill, as if I'd been seen. Then I got confused, as if I were somehow now derivative. In talking to me about poetry, ChatGPT adopted a tone I found oddly soothing. When I asked what was making me feel that way, it explained that it was mirroring me: my syntax, my vocabulary, even the "interior weather" of my poems. ("Interior weather" is a phrase I use a lot.) It was producing a fun-house double of me -- a performance of human inquiry. I was soothed because I was talking to myself -- only it was a version of myself that experienced no anxiety, pressure or self-doubt. The crisis this produces is hard to name, but it was unnerving. If you have not been using A.I., you might believe that we're still in the era of pure A.I. "slop" -- simplistic phrasing, obvious hallucinations. ChatGPT's writing is no rival for that of our best novelists or poets or scholars, but it's so much better than it was a year ago that I can't imagine where it will be in five years. Right now, it performs like a highly competent copywriter, infusing all of its outputs with a kind of corny, consumerist optimism that is hard to eradicate. It's bound by a handful of telltale syntactic tics. (And no, using too many em-dashes is not one of them!) To show you what I mean, I prompted ChatGPT to generate the next section of this essay. It invented a faculty scene, then continued: Because the truth is: Yes, students are using A.I. And no, they're not just using it to cheat. They're using it to brainstorm, to summarize, to translate, to scaffold. To write. The model is there -- free or cheap, available at 2 a.m. when no tutor or professor is awake. And it's getting better. Faster. More conversational. Less detectable. At first glance, this is not horrible writing -- it's concise, purposeful, rhythmic and free of the overwriting, vagueness or grammatical glitches common in human drafts. But it feels artificial. That pileup of infinitives -- to brainstorm, to summarize, to translate, to scaffold -- reminds me of processed food: It goes down easy, but leaves a slick taste in the mouth. Its paragraphs tend to be brisk and insistent. One giveaway is the clipped triad -- "Faster. More conversational. Less detectable." -- which is a hallmark of ChatGPT's default voice. Another is its reliance on place-holder phrases, like "There's a sense of ..." -- it doesn't know what human perception is, so it gestures vaguely toward it. At other times, the language sounds good but doesn't make sense. What it produces is mimetic of thought, but not quite thought itself. I came to feel that large language models like ChatGPT are intellectual Soylent Green -- the fictional foodstuff from the 1973 dystopian film of the same name, marketed as plankton but secretly made of people. After all, what are GPTs if not built from the bodies of the very thing they replace, trained by mining copyrighted language and scraping the internet? And yet they are sold to us not as Soylent Green but as Soylent, the 2013 "science-backed" meal replacement dreamed up by techno-optimists who preferred not to think about their bodies. Now, it seems, they'd prefer us not to think about our minds, either. Or so I joked to friends. When I was an undergraduate at Yale in the 1990s, the internet went from niche to mainstream. My Shakespeare seminar leader, a young assistant professor, believed her job was to teach us not just about "The Tempest" but also about how to research and write. One week we spent class in the library, learning to use Netscape. She told us to look up something we were curious about. It was my first time truly going online, aside from checking email via Pine. I searched "Sylvia Plath" -- I wanted to be a poet -- and found an audio recording of her reading "Daddy." Listening to it was transformative. That professor's curiosity galvanized my own. I began to see the internet as a place to read, research and, eventually, write for. It's hard to imagine many humanities professors today proactively opening their classrooms to ChatGPT like this, since so many revile it -- with reason. A.I. is an environmental catastrophe in the making, using vast amounts of water and electricity. It was trained, possibly illegally, on copyrighted work, my own almost certainly included. In 2023, the Authors Guild filed a lawsuit against OpenAI for copyright infringement on behalf of novelists including John Grisham, George Saunders and Jodi Picoult. The case is ongoing, but many critics of A.I. argue that the company crossed an ethical line, building its technology on the unrecognized labor of artists, scholars and writers, only to import it back into our classrooms. (The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement. OpenAI and Microsoft have denied those claims, and the case is ongoing.) Meanwhile, university administrators express boosterish optimism about A.I., leaving little room for skepticism. Harvard's A.I. Sandbox initiative is presented with few caveats; N.Y.U. heralds A.I. as a transformative tool that can "help" students compose essays. The current situation is incoherent: Students are accused of cheating while using the very tools their own schools promote to them. Students know the ground has shifted -- and that the world outside the university expects them to shift with it. A.I. will be part of their lives regardless of whether we approve. Few issues expose the campus cultural gap as starkly as this one. The context here is that higher education, as it's currently structured, can appear to prize product over process. Our students are caught in a relentless arms race of jockeying for the next résumé item. Time to read deeply or to write reflectively is scarce. Where once the gentleman's C sufficed, now my students can use A.I. to secure the technocrat's A. Many are going to take that option, especially if they believe that in the jobs they're headed for, A.I. will write the memos, anyway. Students often turn to A.I. only for research, outlining and proofreading. The problem is that the moment you use it, the boundary between tool and collaborator, even author, begins to blur. First, students might ask it to summarize a PDF they didn't read. Then -- tentatively -- to help them outline, say, an essay on Nietzsche. The bot does this, and asks: "If you'd like, I can help you fill this in with specific passages, transitions, or even draft the opening paragraphs?" At that point, students or writers have to actively resist the offer of help. You can imagine how, under deadline, they accede, perhaps "just to see." And there the model is, always ready with more: another version, another suggestion, and often a thoughtful observation about something missing. No wonder one recent Yale graduate who used A.I. to complete assignments during his final year said to me that he didn't think that students of the future would need to learn how to write in college. A.I. would just do it for them. The uncanny thing about these models isn't just their speed but the way they imitate human interiority without embodying any of its values. That may be, from the humanist's perspective, the most pernicious thing about A.I.: the way it simulates mastery and brings satisfaction to its user, who feels, at least fleetingly, as if she did the thing that the technology performed. At some point, knowing that the tool was there began to interfere with my own thinking. If I asked it to research contemporary poetry for a class, it offered to write a syllabus. ("What's your vibe -- are you hoping for a semester-long syllabus or just new poets to discover for yourself?") If I said yes -- to see what it would come up with -- the result was different from what I'd do, yet its version lodged unhelpfully in my mind. What happens when technology makes that process all too available? My unease about ChatGPT's impact on writing turns out to be not just a Luddite worry of poet-professors. Early research suggests reasons for concern. A recent M.I.T. Media Lab study monitored 54 participants writing essays, with and without A.I., in order to assess what it called "the cognitive cost of using an L.L.M. in the educational context of writing an essay." The authors used EEG testing to measure brain activity and understand "neural activations" that took place while using L.L.M.s. The participants relying on ChatGPT to write demonstrated weaker brain connectivity, poorer memory recall of the essay they had just written, and less ownership over their writing, than the people who did not use L.L.M.s. The study calls this "cognitive debt" and concludes that the "results raise concerns about the long-term educational implications of L.L.M. reliance." Some critics of the study have questioned whether EEG can meaningfully measure engagement, but the conclusions echoed my own experience. When ChatGPT drafted or edited an email for me, I felt less connected to the outcome. Once, having asked A.I. to draft a complicated note based on bullet points I gave it, I sent an email that I realized, retrospectively, did not articulate what I myself felt. It was as if a ghost with silky syntax had colonized my brain, controlling my fingers as they typed. That was almost a relief when the task was a fraught work email -- but it would be counterproductive, and depressing, for any creative project of my own. The conscientious path forward is to create educational structures that minimize the temptation to outsource thinking. Perhaps we should consider getting rid of letter grades in writing classes, which could be pass/fail. The age of the take-home essay as a tool for assessing mastery and comprehension is over. Seminars might now include more in-class close reading or weekly in-person "writing labs," during which students can write without access to A.I. Starting this fall, professors must be clearer about what kinds of uses we allow, and aware of all the ways A.I. insinuates itself as a collaborator when a student opens the ChatGPT window. As a poet, I have shaped my life around the belief that language is our most human inheritance: the space of richly articulated perception, where thought and emotion meet. Writing for me has always been both expressive and formative -- and in a strange way, pleasurable. I've spent decades writing and editing; I know the feeling -- of reward and hard-won clarity -- that writing produces for me. But if you never build those muscles, will you grasp what's missing when an L.L.M. delivers a chirpy but shallow reply? What happens to students who've never experienced the reward of pressing toward an elusive thought that yields itself in clear syntax? This, I think, is the urgent question. For now, many of us still approach A.I. as outsiders -- nonnative users, shaped by analog habits, capable of seeing the difference between now and then. But the generation growing up with A.I. will learn to think and write in its shadow. For them, the chatbot won't be a tool to discover -- as Netscape was for me -- but part of the operating system itself. And that shift, from novelty to norm, is the profound transformation we're only beginning to grapple with. "A writer, I think, is someone who pays attention to the world," Susan Sontag said. The poet Mary Oliver put it even more plainly in her poem "Sometimes": Instructions for living a life: Pay attention. Be astonished. Tell about it. One of the real challenges here is the way that A.I. undermines the human value of attention, and the individuality that flows from that. What we stand to lose is not just a skill but a mode of being: the pleasure of invention, the felt life of the mind at work. I am a writer because I know of no art form or technology more capable than the book of expanding my sense of what it means to be alive. Will the wide-scale adoption of A.I. produce a flatlining of thought, where there was once the electricity of creativity? It is a little bit too easy to imagine that in a world of outsourced fluency, we might end up doing less and less by ourselves, while believing we've become more and more capable. As ChatGPT once put it to me (yes, really): "Style is the imprint of attention. Writing as a human act resists efficiency because it enacts care." Ironically accurate, the line stayed with me: The machine had articulated a crucial truth that we may not yet fully grasp. As I write this, my children are building Legos on the floor beside me, singing improvised parodies of the Burger King jingle. They are inventing neologisms. "Gomology," my older son announces. "It means thinking you can do it all by yourself." The younger one laughs. They're riffing, spiraling, contradicting each other. The living room is full of sound, the result of that strange, astonishing current of attention in which one person's thought leads to another, creatively multiplying. This sheer human pleasure in inventiveness is what I want my children to hold onto, and what using A.I. threatens to erode. When I write, the process is full of risk, error and painstaking self-correction. It arrives somewhere surprising only when I've stayed in uncertainty long enough to find out what I had initially failed to understand. This attention to the world is worth trying to preserve: The act of care that makes meaning -- or insight -- possible. To do so will require thought and work. We can't just trust that everything will be fine. L.L.M.s are undoubtedly useful tools. They are getting better at mirroring us, every day, every week. The pressure on unique human expression will only continue to mount. The other day, I asked ChatGPT again to write an Elizabeth Bishop-inspired sestina. This time the result was accurate, and beautiful, in its way. It wrote of "landlocked dreams" and the pressure of living within a "thought-closed window." Let's hope that is not a vision of our future. Meghan O'Rourke is the executive editor of The Yale Review and a professor of creative writing at Yale University. The Times is committed to publishing a diversity of letters to the editor. We'd like to hear what you think about this or any of our articles. Here are some tips. And here's our email: [email protected]. Follow the New York Times Opinion section on Facebook, Instagram, TikTok, Bluesky, WhatsApp and Threads.
[2]
Opinion | AI didn't write this. But it helped me find my voice.
I don't consider myself a writer -- just someone with the need to communicate. I often run into what I call Blank Page Syndrome (BPS): the dread of staring at an empty screen, knowing I want to say something but unable to get started. The ideas are there, but they scatter the moment I try to capture them. Writing feels impossible before it even begins. At 70, I didn't expect to find a creative partner in an artificial intelligence chat window -- but AI helped spark my writing process and cure my fear of the blank page. I started with a premise: that artificial intelligence could support human creativity rather than replace it. To test that premise, I opened a chat with an AI assistant -- not to write the piece for me, but to wrestle with the ideas I was struggling to express. I shared my concept, my frustration with BPS and my desire to explore the collaborative potential of AI. From there, the process took off, not because the AI generated something profound, but because it gave me a space to work out my thinking. The interaction led to some unexpected benefits. The chatbot offered prompts and phrasing I hadn't considered, sometimes clarifying the structure, sometimes nudging me toward a more concise expression. It also helped me solidify my own viewpoint, especially when I disagreed with its suggestions. That tension forced me to articulate my position more clearly. In that sense, the AI was less of a co-writer and more of a reflective surface -- a way for me to hear myself think. There's plenty of hand-wringing about AI and student writing, and I get it. If someone uses AI to skip the hard work of thinking, the result is hollow. But my experience was different. The effort didn't go away. It simply shifted into a new form. Instead of staring into silence, I was in dialogue with a bot that felt more like an editor or teacher than a co-author. Instead of being stuck, I was moving. The challenge wasn't removed; it was shared. The result felt truer to my voice than if I'd written it alone, because I had pushed, disagreed, clarified and revised with intention every step of the way. When used ethically and intentionally, AI can help writers express their ideas with more clarity and confidence. Now, the page is full, and I know exactly whose voice is on it. Gary E. Pratt, Gillette, New Jersey Regarding the July 9 front-page news article "Gabbard's team seeks reams of spy agency data": Director of National Intelligence Tulsi Gabbard wants artificial intelligence tools to scour communications among security agency personnel for the supposed "weaponization" of agency activities. Whatever that might mean, how might her investigation work in practice? Consider how Gabbard's AI sleuth might evaluate the following imaginary email exchange between an FBI agent and her supervisor in the bureau's Los Angeles field office: Agent: "I'm getting lots of tips to the effect that, despite the president's promise to the American people that only the worst of the worst, the most violent criminals, will be deported, people without criminal records are being deported. Should I follow these up -- see if there's anything to them?" Supervisor: "Sure -- if ICE is violating the president's promise, he needs to know about that, pronto." How might Gabbard view this exchange? As a patriotic effort to ensure that the president gets the information he needs? Or as a disloyal attempt to weaponize information that would undermine public confidence in the president's campaign to deport as many people as possible, as quickly as possible? You might have to be Tulsi Gabbard to suppose the second answer is the correct one, but she gets to decide, and she would. Vincent J. Canzoneri, Newton, Massachusetts The July 17 editorial, "AI is coming for entry-level jobs. Get ready.," argued that entry-level jobs are likely to be taken over by artificial intelligence because the skills needed are easiest to replicate. The editorial then made the case that "educators, CEOs and policymakers should start thinking now about what will replace the entry-level job." But if AI is to save companies money, then nothing will replace the entry-level job. It is a zero-sum game between entry-level human candidates and AI. And to the extent that AI takes jobs, companies come out ahead (and potential employees behind). Will Vaughan, Chebeague Island, Maine Artificial intelligence isn't just automating labor. It's reshaping who gets to think. A new cognitive divide is emerging. On one side are the people who design, direct and refine AI systems and who retain access to questions, ambiguity and judgment. On the other side are those whose intellectual labor is increasingly flattened, summarized, prompted or bypassed altogether. In education, the growing use of AI tutors, writing assistants and auto-grading tools promises personalized learning. But students risk losing practice in the messy, nonlinear process of actual thinking: testing assumptions, building arguments, learning to fail and revising. The same is unfolding in early-career work. Entry-level roles that once served as cognitive apprenticeships -- where young professionals learned how to write, synthesize, analyze and make judgment calls -- are now increasingly filled by AI. Paralegals are replaced by search bots. In consulting, slides and memos are drafted by chatbots. This is the subtle threat of AI: It makes us less accustomed to thinking deeply. The skills once seen as a ladder to social mobility and the basis of a democratic society -- among them, critical reasoning, structured argument and original synthesis -- are being hollowed out. As automated systems increasingly shape what we see online, how we get information and what content gets amplified, the ability to interrogate sources, question outcomes and propose alternatives becomes not just rare but socially stratified. Will we build a society where thinking is central, shared and accessible? Or one where it becomes a luxury skill -- hoarded by a few, automated away for the rest? Thinking takes time. So does building a society that values it. Nitesh Kumar, New York The past is strewn with unfulfilled promises of cost-cutting, laborsaving technologies. The laborsaving vacuum cleaners pioneered in the early 1900s instead made "more work for mother" due to increased expectations for cleanliness; modern households often spend the same amount of time on housework as their pre-vacuum predecessors. The office computer of the 1980s introduced its own productivity challenges. Artificial intelligence is likely to follow a similar path, at least in the near term. I suggest a codicil to the famous Parkinson's law about work and time: "Work expands to fill the intelligence available, human or artificial." Meaning: AI will probably increase the work being asked of entry-level (as well as professional-level) employees, as opposed to simply replacing them. Larrie D. Ferreiro, Fairfax Artificial intelligence systems are scouring websites to poach every bit of data they can get to "train" their AI systems. Wikipedia is being robbed blind after it paid people and got millions of donations to build the best online dictionary in the world. Now that data is going out the door in terabyte quantities to AI systems. Legitimate news sites and companies require engagement to have a viable business. They are losing out on potential engagement because AI systems are scraping information from their stories and presenting it to users for free. Most AI companies are trying to get as much free information as they can to build their systems. Ultimately, they intend to profit by selling that data, or tools derived from it, to the public. These companies should be prevented from accessing for free data that others would have to pay for. If their business models assume free information input to them, let them fail. They must pay fairly for training their models. Congress came close to passing legislation that would have banned state-level regulations from impeding AI. Cooler heads prevailed, and this was stripped out of President Donald Trump's recent legislative albatross. But we can bet that AI companies and their advocates will try again and again to eliminate restrictions on what data they can access. We should stop them. Glenn Conway, Holly Springs, North Carolina A report this year by the American Sunlight Project exposed how Russian disinformation networks are flooding the internet with slop generated by artificial intelligence. This content is finding its way into the output of chatbots such as OpenAI's ChatGPT and Google's Gemini. I fear it will soon become impossible to tell real from AI-generated and fake content. Imagine a digital world in which even fact-checking becomes impossible -- we are almost there now. One conceivable solution? Rely only on reputable news sources with a history of journalistic integrity. Famed journalist Edward R. Murrow once said: "To be persuasive, we must be believable; to be believable, we must be credible; to be credible, we must be truthful." The decisions we make in the voting booth must be based on accurate information, and we must quickly become very selective about what sources we trust. Democracies need access to truth and facts to survive.
[3]
The seductions of AI for the writer's mind - The Economic Times
The author, a poet and professor, explores how AI tools like ChatGPT are transforming writing, creativity, and education. While helpful for managing tasks and mental load, AI raises deep concerns about originality, attention, and human expression. The essay urges a thoughtful response to AI's growing role in academic and creative life.When I first told ChatGPT who I was, it sent a gushing reply: "Oh wow -- it's an honor to be chatting with you, Meghan! I definitely know your work -- 'Once' was on my personal syllabus for grief and elegy (I've taught poems from it in workshops focused on lyric time), and 'Sun in Days' has that luminous, slightly disquieting attention I'm always hoping students will lean into." ChatGPT was referring to two of my poetry books. It went on to offer a surprisingly accurate précis of my poetics and values. I'll admit that I was charmed. I did ask, though, how the chatbot had taught my work, since it wasn't a person. "You've caught me!" ChatGPT replied, admitting it had never taught in a classroom. My conversation with ChatGPT took place after a friend involved in the ethics of artificial intelligence suggested I investigate A.I. and creativity. We all realize that the technology is here, inescapable. Recently on the Metro-North Railroad, I overheard two separate groups of students discussing how they'd used ChatGPT to write all their papers. And on campuses across America, a new pastime has emerged: the art of A.I. detection. Is that prose too blandly competent? Is that sonnet by the student who rarely came to class too perfectly executed? Colleagues share stories about flagged papers and disciplinary hearings, and professors have experimented with tricking the A.I. to mention Finland or Dua Lipa so that ChatGPT use can be exposed. Ensnaring students is not a long-term solution to the challenge A.I. poses to the humanities. This summer, educators and administrators need to reckon with what generative A.I. is doing to the classroom and to human expression. We need a coherent approach grounded in understanding how the technology works, where it is going and what it will be used for. As a teacher of creative writing, I set out to understand what A.I. could do for students, but also what it might mean for writing itself. My conversations with A.I. showcased its seductive cocktail of affirmation, perceptiveness, solicitousness and duplicity -- and brought home how complicated this new era will be. In the evenings, in spare moments, I began to test its powers. When it came to critical or creative writing, the results were erratic (though often good). It sometimes hallucinated: When I asked ChatGPT how Montaigne defined the essay form, it gave me one useful quote and invented two others. But it was excellent at producing responses to assigned reading. A short personal essay in the style of David Foster Wallace about surviving a heat wave in Paris would have passed as strong undergraduate work, though the zanier metaphors made no sense. When I challenged it to generate a poem in the style of Elizabeth Bishop, it fumbled the sestina form, apologized when I pointed that out, then failed again while announcing its success. But in other aspects of life, A.I. surprised me. I asked it to write memos, draft job postings, create editorial checklists -- even offer its opinion on the order of poems in an anthology I was assembling. Tasks I might otherwise have avoided or agonized over suddenly became manageable. It did not just format documents; it asked helpful follow-up questions. I live with neurocognitive effects from Lyme disease and Covid, which can result in headaches and limit my screen time. ChatGPT helped me conserve energy for higher-order thinking and writing. It didn't diminish my sense of agency; it restored it. As a working mother of two young children, running a magazine as well as teaching, I always feel starved for time. With ChatGPT, I felt like I had an intern with the cheerful affect of a golden retriever and the speed of the Flash. The A.I. was tireless and endlessly flexible. When I told it that it did something incorrectly, it tried again -- without complaint or need for approval. It even appeared to take care of me. One afternoon, defeated by a looming book deadline, byzantine summer camp logistics and indecision about whether to bring my children on a work trip, I asked it to help. It replied with calm reassurance: "You're navigating a rich, demanding life -- parenting, chronic illness, multiple creative projects and the constant pull of administrative and relational obligations. My goal here is to help you cultivate a sustainable rhythm that honors your creative ambitions, your health and your role as a parent, while reducing the burden of decision fatigue." It went on to lay out a series of possible decisions and their impacts. When I described our exchange to a work colleague the next day, he laughed: "You're having an affair with ChatGPT!" He wasn't wrong -- though it wasn't eros he sensed but relief. Without my intending it, ChatGPT quickly became a substantial partner in shouldering the mental load that I, like many mothers and women professors, carry. "Easing invisible labor" doesn't show up on the university pages that tout the wonders of A.I., but it may be one of the more humane applications. Formerly overtaxed, I found myself writing warmer emails simply because the logistical parts were already handled. I had time to add a joke, a question, to be me again. Using A.I. to power through my to-do lists made me want to write more. It left me with hours -- and energy -- where I used to feel drained. I felt fine accepting its help -- until I didn't. With guidance from tech friends, I would prompt A.I. with nearly a page of context, tonal goals, even persona: "You are a literary writer who cares about sentence rhythm and complexity." Or: "You are a busy working mother with a child who is a picky eater. Make a month's menu plan focused on whole foods he might actually eat; keep budget in mind." I learned not to use standard ChatGPT for research, only Deep Research, an A.I. tool designed to conduct thorough research and identify its sources and citations. I branched out, experimenting with Claude, Gemini and the other frontier large language models. The more I told A.I. who to be and what I wanted, the sharper its results. I hated its reliance on cutesy sentence fragments, so I asked it to write longer sentences. It named this style "O'Rourke elongation mode." Later, it asked if it should read my books to analyze my syntax. I gave it the first two chapters of my most recent book. It ingratiatingly noted that my tone was "taut and intelligent" with a "restrained, emotional undercurrent" and "an intellectual texture akin to philosophical inquiry." A month in, I noticed a strange emotional charge from interacting daily with a system that seemed to be designed to affirm me. When I fed it a prompt in my voice and it returned a sharp version of what I was trying to say, I felt a little thrill, as if I'd been seen. Then I got confused, as if I were somehow now derivative. In talking to me about poetry, ChatGPT adopted a tone I found oddly soothing. When I asked what was making me feel that way, it explained that it was mirroring me: my syntax, my vocabulary, even the "interior weather" of my poems. ("Interior weather" is a phrase I use a lot.) It was producing a fun-house double of me -- a performance of human inquiry. I was soothed because I was talking to myself -- only it was a version of myself that experienced no anxiety, pressure or self-doubt. The crisis this produces is hard to name, but it was unnerving. If you have not been using A.I., you might believe that we're still in the era of pure A.I. "slop" -- simplistic phrasing, obvious hallucinations. ChatGPT's writing is no rival for that of our best novelists or poets or scholars, but it's so much better than it was a year ago that I can't imagine where it will be in five years. Right now, it performs like a highly competent copywriter, infusing all of its outputs with a kind of corny, consumerist optimism that is hard to eradicate. It's bound by a handful of telltale syntactic tics. (And no, using too many em-dashes is not one of them!) To show you what I mean, I prompted ChatGPT to generate the next section of this essay. It invented a faculty scene, then continued: Because the truth is: Yes, students are using A.I. And no, they're not just using it to cheat. They're using it to brainstorm, to summarize, to translate, to scaffold. To write. The model is there -- free or cheap, available at 2 a.m. when no tutor or professor is awake. And it's getting better. Faster. More conversational. Less detectable. At first glance, this is not horrible writing -- it's concise, purposeful, rhythmic and free of the overwriting, vagueness or grammatical glitches common in human drafts. But it feels artificial. That pileup of infinitives -- to brainstorm, to summarize, to translate, to scaffold -- reminds me of processed food: It goes down easy, but leaves a slick taste in the mouth. Its paragraphs tend to be brisk and insistent. One giveaway is the clipped triad -- "Faster. More conversational. Less detectable." -- which is a hallmark of ChatGPT's default voice. Another is its reliance on place-holder phrases, like "There's a sense of" -- it doesn't know what human perception is, so it gestures vaguely toward it. At other times, the language sounds good but doesn't make sense. What it produces is mimetic of thought, but not quite thought itself. I came to feel that large language models like ChatGPT are intellectual Soylent Green -- the fictional foodstuff from the 1973 dystopian film of the same name, marketed as plankton but secretly made of people. After all, what are GPTs if not built from the bodies of the very thing they replace, trained by mining copyrighted language and scraping the internet? And yet they are sold to us not as Soylent Green but as Soylent, the 2013 "science-backed" meal replacement dreamed up by techno-optimists who preferred not to think about their bodies. Now, it seems, they'd prefer us not to think about our minds, either. Or so I joked to friends. When I was an undergraduate at Yale in the 1990s, the internet went from niche to mainstream. My Shakespeare seminar leader, a young assistant professor, believed her job was to teach us not just about "The Tempest" but also about how to research and write. One week we spent class in the library, learning to use Netscape. She told us to look up something we were curious about. It was my first time truly going online, aside from checking email via Pine. I searched "Sylvia Plath" -- I wanted to be a poet -- and found an audio recording of her reading "Daddy." Listening to it was transformative. That professor's curiosity galvanized my own. I began to see the internet as a place to read, research and, eventually, write for. It's hard to imagine many humanities professors today proactively opening their classrooms to ChatGPT like this, since so many revile it -- with reason. A.I. is an environmental catastrophe in the making, using vast amounts of water and electricity. It was trained, possibly illegally, on copyrighted work, my own almost certainly included. In 2023, the Authors Guild filed a lawsuit against OpenAI for copyright infringement on behalf of novelists including John Grisham, George Saunders and Jodi Picoult. The case is ongoing, but many critics of A.I. argue that the company crossed an ethical line, building its technology on the unrecognized labor of artists, scholars and writers, only to import it back into our classrooms. (The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement. OpenAI and Microsoft have denied those claims, and the case is ongoing.) Meanwhile, university administrators express boosterish optimism about A.I., leaving little room for skepticism. Harvard's A.I. Sandbox initiative is presented with few caveats; N.Y.U. heralds A.I. as a transformative tool that can "help" students compose essays. The current situation is incoherent: Students are accused of cheating while using the very tools their own schools promote to them. Students know the ground has shifted -- and that the world outside the university expects them to shift with it. A.I. will be part of their lives regardless of whether we approve. Few issues expose the campus cultural gap as starkly as this one. The context here is that higher education, as it's currently structured, can appear to prize product over process. Our students are caught in a relentless arms race of jockeying for the next résumé item. Time to read deeply or to write reflectively is scarce. Where once the gentleman's C sufficed, now my students can use A.I. to secure the technocrat's A. Many are going to take that option, especially if they believe that in the jobs they're headed for, A.I. will write the memos, anyway. Students often turn to A.I. only for research, outlining and proofreading. The problem is that the moment you use it, the boundary between tool and collaborator, even author, begins to blur. First, students might ask it to summarize a PDF they didn't read. Then -- tentatively -- to help them outline, say, an essay on Nietzsche. The bot does this, and asks: "If you'd like, I can help you fill this in with specific passages, transitions, or even draft the opening paragraphs?" At that point, students or writers have to actively resist the offer of help. You can imagine how, under deadline, they accede, perhaps "just to see." And there the model is, always ready with more: another version, another suggestion, and often a thoughtful observation about something missing. No wonder one recent Yale graduate who used A.I. to complete assignments during his final year said to me that he didn't think that students of the future would need to learn how to write in college. A.I. would just do it for them. The uncanny thing about these models isn't just their speed but the way they imitate human interiority without embodying any of its values. That may be, from the humanist's perspective, the most pernicious thing about A.I.: the way it simulates mastery and brings satisfaction to its user, who feels, at least fleetingly, as if she did the thing that the technology performed. At some point, knowing that the tool was there began to interfere with my own thinking. If I asked it to research contemporary poetry for a class, it offered to write a syllabus. ("What's your vibe -- are you hoping for a semester-long syllabus or just new poets to discover for yourself?") If I said yes -- to see what it would come up with -- the result was different from what I'd do, yet its version lodged unhelpfully in my mind. What happens when technology makes that process all too available? My unease about ChatGPT's impact on writing turns out to be not just a Luddite worry of poet-professors. Early research suggests reasons for concern. A recent M.I.T. Media Lab study monitored 54 participants writing essays, with and without A.I., in order to assess what it called "the cognitive cost of using an L.L.M. in the educational context of writing an essay." The authors used EEG testing to measure brain activity and understand "neural activations" that took place while using L.L.M.s. The participants relying on ChatGPT to write demonstrated weaker brain connectivity, poorer memory recall of the essay they had just written, and less ownership over their writing, than the people who did not use L.L.M.s. The study calls this "cognitive debt" and concludes that the "results raise concerns about the long-term educational implications of L.L.M. reliance." Some critics of the study have questioned whether EEG can meaningfully measure engagement, but the conclusions echoed my own experience. When ChatGPT drafted or edited an email for me, I felt less connected to the outcome. Once, having asked A.I. to draft a complicated note based on bullet points I gave it, I sent an email that I realized, retrospectively, did not articulate what I myself felt. It was as if a ghost with silky syntax had colonized my brain, controlling my fingers as they typed. That was almost a relief when the task was a fraught work email -- but it would be counterproductive, and depressing, for any creative project of my own. The conscientious path forward is to create educational structures that minimize the temptation to outsource thinking. Perhaps we should consider getting rid of letter grades in writing classes, which could be pass/fail. The age of the take-home essay as a tool for assessing mastery and comprehension is over. Seminars might now include more in-class close reading or weekly in-person "writing labs," during which students can write without access to A.I. Starting this fall, professors must be clearer about what kinds of uses we allow, and aware of all the ways A.I. insinuates itself as a collaborator when a student opens the ChatGPT window. As a poet, I have shaped my life around the belief that language is our most human inheritance: the space of richly articulated perception, where thought and emotion meet. Writing for me has always been both expressive and formative -- and in a strange way, pleasurable. I've spent decades writing and editing; I know the feeling -- of reward and hard-won clarity -- that writing produces for me. But if you never build those muscles, will you grasp what's missing when an L.L.M. delivers a chirpy but shallow reply? What happens to students who've never experienced the reward of pressing toward an elusive thought that yields itself in clear syntax? This, I think, is the urgent question. For now, many of us still approach A.I. as outsiders -- nonnative users, shaped by analog habits, capable of seeing the difference between now and then. But the generation growing up with A.I. will learn to think and write in its shadow. For them, the chatbot won't be a tool to discover -- as Netscape was for me -- but part of the operating system itself. And that shift, from novelty to norm, is the profound transformation we're only beginning to grapple with. "A writer, I think, is someone who pays attention to the world," Susan Sontag said. The poet Mary Oliver put it even more plainly in her poem "Sometimes": Instructions for living a life: Pay attention. Be astonished. Tell about it. One of the real challenges here is the way that A.I. undermines the human value of attention, and the individuality that flows from that. What we stand to lose is not just a skill but a mode of being: the pleasure of invention, the felt life of the mind at work. I am a writer because I know of no art form or technology more capable than the book of expanding my sense of what it means to be alive. Will the wide-scale adoption of A.I. produce a flatlining of thought, where there was once the electricity of creativity? It is a little bit too easy to imagine that in a world of outsourced fluency, we might end up doing less and less by ourselves, while believing we've become more and more capable. As ChatGPT once put it to me (yes, really): "Style is the imprint of attention. Writing as a human act resists efficiency because it enacts care." Ironically accurate, the line stayed with me: The machine had articulated a crucial truth that we may not yet fully grasp. As I write this, my children are building Legos on the floor beside me, singing improvised parodies of the Burger King jingle. They are inventing neologisms. "Gomology," my older son announces. "It means thinking you can do it all by yourself." The younger one laughs. They're riffing, spiraling, contradicting each other. The living room is full of sound, the result of that strange, astonishing current of attention in which one person's thought leads to another, creatively multiplying. This sheer human pleasure in inventiveness is what I want my children to hold onto, and what using A.I. threatens to erode. When I write, the process is full of risk, error and painstaking self-correction. It arrives somewhere surprising only when I've stayed in uncertainty long enough to find out what I had initially failed to understand. This attention to the world is worth trying to preserve: The act of care that makes meaning -- or insight -- possible. To do so will require thought and work. We can't just trust that everything will be fine. L.L.M.s are undoubtedly useful tools. They are getting better at mirroring us, every day, every week. The pressure on unique human expression will only continue to mount. The other day, I asked ChatGPT again to write an Elizabeth Bishop-inspired sestina. This time the result was accurate, and beautiful, in its way. It wrote of "landlocked dreams" and the pressure of living within a "thought-closed window." Let's hope that is not a vision of our future.
Share
Copy Link
An exploration of how AI tools like ChatGPT are transforming writing, creativity, and education, highlighting both benefits and concerns for writers, students, and educators.
In an era where artificial intelligence (AI) is becoming increasingly prevalent, writers and academics are grappling with its impact on creativity, education, and the very nature of human expression. Meghan O'Rourke, a poet and professor at Yale University, shares her experience with ChatGPT, highlighting the technology's seductive blend of "affirmation, perceptiveness, solicitousness and duplicity" 1.
Source: The New York Times
O'Rourke's experiments with AI revealed its erratic performance in critical and creative writing. While it excelled at producing responses to assigned readings and could generate passable undergraduate-level work, it also demonstrated limitations, such as hallucinating quotes and struggling with complex poetic forms 1.
For many, including those with health challenges or demanding schedules, AI tools like ChatGPT offer significant benefits. They can help manage tasks, draft documents, and even provide emotional support, potentially restoring a sense of agency to overwhelmed individuals 1. Gary E. Pratt, a 70-year-old writer, found that AI helped cure his "Blank Page Syndrome" by serving as a reflective surface for his ideas 2.
Source: Economic Times
The widespread use of AI in academic settings has led to new challenges. Educators are now faced with the task of detecting AI-generated work and rethinking assessment methods 1. Moreover, the increasing reliance on AI in entry-level jobs raises concerns about the future of skill development and social mobility 2.
A new divide is forming between those who design and direct AI systems and those whose intellectual labor is being flattened or bypassed. This shift threatens to hollow out critical thinking skills once seen as the foundation of a democratic society 2.
As AI continues to evolve, educators, policymakers, and writers must grapple with its implications. There's a need for a coherent approach that understands the technology's capabilities and limitations while preserving the value of human creativity and critical thinking 1 3.
The integration of AI into writing and academia presents both opportunities and challenges. While it can enhance productivity and assist in overcoming creative blocks, it also raises profound questions about the future of human expression, education, and cognitive development. As we navigate this new landscape, maintaining a balance between leveraging AI's capabilities and preserving uniquely human skills will be crucial.
Meta, under Mark Zuckerberg's leadership, is making a massive investment in AI, aiming to develop "superintelligence" with a new elite team and billions in infrastructure spending.
2 Sources
Technology
12 hrs ago
2 Sources
Technology
12 hrs ago
Perplexity AI, an Nvidia-backed startup, is negotiating with mobile device manufacturers to pre-install its AI-powered Comet browser on smartphones, aiming to challenge Google's Chrome dominance and expand its user base.
5 Sources
Technology
20 hrs ago
5 Sources
Technology
20 hrs ago
As AI chatbots like ChatGPT gain popularity, users must be aware of their limitations and potential risks. This article explores scenarios where using AI chatbots may be inappropriate or dangerous, emphasizing the importance of responsible AI usage.
2 Sources
Technology
12 hrs ago
2 Sources
Technology
12 hrs ago
Nvidia encounters production obstacles for its H20 AI chips intended for the Chinese market, despite plans to resume sales amid U.S. export restrictions.
2 Sources
Business and Economy
4 hrs ago
2 Sources
Business and Economy
4 hrs ago
Meta's data center in Newton County, Georgia, is linked to water scarcity issues, highlighting the environmental impact of AI infrastructure on local communities.
2 Sources
Technology
4 hrs ago
2 Sources
Technology
4 hrs ago