6 Sources
[1]
What Happens to Your Brain When You Use ChatGPT? Scientists Took a Look
Your brain works differently when you're using generative AI to complete a task than it does when you use your brain alone. Namely, you're less likely to remember what you did. That's the somewhat obvious-sounding conclusion of an MIT study that looked at how people think when they write an essay -- one of the earliest scientific studies of how using gen AI affects us. The study, a preprint that has not yet been peer-reviewed, is pretty small (54 participants) and preliminary, but it points toward the need for more research into how using tools like OpenAI's ChatGPT is affecting how our brains function. OpenAI did not immediately respond to a request for comment on the research. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) The findings show a significant difference in what happens in your brain and with your memory when you complete a task using an AI tool rather than when you do it with just your brain. But don't read too much into those differences -- this is just a glimpse at brain activity in the moment, not long-term evidence of changes in how your brain operates all the time, researchers said. "We want to try to give some first steps in this direction and also encourage others to ask the question," Nataliya Kosmyna, a research scientist at MIT and the lead author of the study, told me. The growth of AI tools like chatbots is quickly changing how we work, search for information and write. All of this has happened so fast that it's easy to forget that ChatGPT first emerged as a popular tool just a few years ago, at the end of 2022. That means we're just now beginning to see research on how AI use is affecting us. Here's a look at what the MIT study found about what happened in the brains of ChatGPT users, and what future studies might tell us. The MIT researchers split their 54 research participants into three groups and asked them to write essays during separate sessions over several weeks. One group was given access to ChatGPT, another was allowed to use a standard search engine (Google), and the third had none of those tools, just their own brains. The researchers analyzed the texts they produced, interviewed the subjects immediately after they wrote the essays, and recorded the participants' brain activity using electroencephalography (EEG). An analysis of the language used in the essays found that those in the "brain-only" group wrote in more distinct ways, while those who used large language models produced fairly similar essays. More interesting findings came from the interviews after the essays were written. Those who used their brains alone showed better recall and were better able to quote from their writing than those who used search engines or LLMs. It might be unsurprising that those who relied more heavily on LLMs, who may have copied and pasted from the chatbot's responses, would be less able to quote what they had "written." Kosmyna said these interviews were done immediately after the writing happened, and the lack of recall is notable. "You wrote it, didn't you?" she said. "Aren't you supposed to know what it was?" The EEG results also showed significant differences between the three groups.
There was more neural connectivity -- interaction between the components of the brain -- among the brain-only participants than in the search engine group, and the LLM group had the least activity. Again, not an entirely surprising conclusion. Using tools means you use less of your brain to complete a task. But Kosmyna said the research helped show what the differences were: "The idea was to look closer to understand that it's different, but how is it different?" she said. The LLM group showed "weaker memory traces, reduced self-monitoring and fragmented authorship," the study authors wrote. That can be a concern in a learning environment: "If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership over it." After the first three essays, the researchers invited participants back for a fourth session in which they were assigned to a different group. The findings there, from a significantly smaller group of subjects (just 18), found that those who were in the brain-only group at first showed more activity even when using an LLM, while those in the LLM-only group showed less neural connectivity without the LLM than the initial brain-only group had. When the MIT study was released, many headlines claimed it showed ChatGPT use was "rotting" brains or causing significant long-term problems. That's not exactly what the researchers found, Kosmyna said. The study focused on the brain activity that happened while the participants were working -- their brain's internal circuitry in the moment. It also examined their memory of their work in that moment. Understanding the long-term effects of AI use would require a longer-term study and different methods. Kosmyna said future research could look at other gen AI use cases, like coding, or use technology that examines different parts of the brain, like functional magnetic resonance imaging, or fMRI. "The whole idea is to encourage more experiments, more scientific data collection," she said. While the use of LLMs is still being researched, it's also likely that the effect on our brains isn't as significant as you might think, said Genevieve Stein-O'Brien, assistant professor of neuroscience at Johns Hopkins University, who was not involved in the MIT study. She studies how genetics and biology help develop and build the brain -- which occurs early in life. Those critical periods tend to close during childhood or adolescence, she said. "All of this happens way before you ever interact with ChatGPT or anything like that," Stein-O'Brien told me. "There is a lot of infrastructure that is set up, and that is very robust." The situation might be different in children, who are increasingly coming into contact with AI technology, although the study of children raises ethical concerns for scientists wanting to research human behavior, Stein-O'Brien said. The idea of studying the effect of AI use on essay writing might sound pointless to some. After all, wasn't the point of writing an essay in school to get a grade? Why not outsource that work to a machine that can do it, if not better, then more easily? The MIT study gets to the point of the task: Writing an essay is about developing your thinking, about understanding the world around you. 
"We start out with what we know when we begin writing, but in the act of writing, we end up framing the next questions and thinking about new ideas or new content to explore," said Robert Cummings, a professor of writing and rhetoric at the University of Mississippi. Cummings has done similar research on the way computer technologies affect how we write. One study involved sentence completion technology -- what you might know informally as autocomplete. He took 119 writers and tasked them with writing an essay. Roughly half had computers with Google Smart Compose enabled, while the rest didn't. Did it make writers faster, or did they spend more time and write less because they had to navigate the choices proposed? The result was that they wrote about the same amount in the same time period. "They weren't writing in different sentence lengths, with different levels of complexity of ideas," he told me. "It was straight-up equal." ChatGPT and its ilk are a different beast. With a sentence completion technology, you still have control over the words, you still have to make writing choices. In the MIT study, some participants just copied and pasted what ChatGPT said. They might not have even read the work they turned in as their own. "My personal opinion is that when students are using generative AI to replace their writing, they're kind of surrendering, they're not actively engaged in their project any longer," Cummings said. The MIT researchers found something interesting in that fourth session, when they noticed that the group who had written three essays without tools had higher levels of engagement when finally given tools. "Taken together, these findings support an educational model that delays AI integration until learners have engaged in sufficient self-driven cognitive effort," they wrote. "Such an approach may promote both immediate tool efficacy and lasting cognitive autonomy." Cummings said he has started teaching his composition class with no devices. Students write by hand in class, generally on topics that are more personal and would be harder to feed into an LLM. He said he doesn't feel like he's grading papers written by AI, that his students are getting a chance to engage with their own ideas before seeking help from a tool. "I'm not going back," he said.
[2]
'Writing is thinking': Do students who use ChatGPT learn less?
When Jocelyn Leitzinger had her university students write about times in their lives they had witnessed discrimination, she noticed that a woman named Sally was the victim in many of the stories. "It was very clear that ChatGPT had decided this is a common woman's name," said Leitzinger, who teaches an undergraduate class on business and society at the University of Illinois in Chicago. "They weren't even coming up with their own anecdotal stories about their own lives," she told AFP. Leitzinger estimated that around half of her 180 students used ChatGPT inappropriately at some point last semester -- including when writing about the ethics of artificial intelligence (AI), which she called both "ironic" and "mind-boggling." So she was not surprised by recent research which suggested that students who use ChatGPT to write essays engage in less critical thinking. The preprint study, which has not been peer-reviewed, was shared widely online and clearly struck a chord with some frustrated educators. The team of MIT researchers behind the paper have received more than 3,000 emails from teachers of all stripes since it was published online last month, lead author Nataliya Kosmyna told AFP.

'Soulless' AI essays

For the small study, 54 adult students from the greater Boston area were split into three groups. One group used ChatGPT to write 20-minute essays, one used a search engine, and the final group had to make do with only their brains. The researchers used EEG devices to measure the brain activity of the students, and two teachers marked the essays. The ChatGPT users scored significantly worse than the brain-only group on all levels. The EEG showed that different areas of their brains connected to each other less often. And more than 80% of the ChatGPT group could not quote anything from the essay they had just written, compared to around 10% of the other two groups. By the third session, the ChatGPT group appeared to be mostly focused on copying and pasting. The teachers said they could easily spot the "soulless" ChatGPT essays because they had good grammar and structure but lacked creativity, personality and insight. However Kosmyna pushed back against media reports claiming the paper showed that using ChatGPT made people lazier or more stupid. She pointed to the fourth session, when the brain-only group used ChatGPT to write their essay and displayed even higher levels of neural connectivity. Kosmyna emphasized it was too early to draw conclusions from the study's small sample size but called for more research into how AI tools could be used more carefully to help learning. Ashley Juavinett, a neuroscientist at the University of California San Diego who was not involved in the research, criticized some "offbase" headlines that wrongly extrapolated from the preprint. "This paper does not contain enough evidence nor the methodological rigor to make any claims about the neural impact of using LLMs (large language models such as ChatGPT) on our brains," she told AFP.

Thinking outside the bot

Leitzinger said the research reflected how she had seen student essays change since ChatGPT was released in 2022, as both spelling errors and authentic insight became less common. Sometimes students do not even change the font when they copy and paste from ChatGPT, she said. But Leitzinger called for empathy for students, saying they can get confused when the use of AI is being encouraged by universities in some classes but is banned in others.
The usefulness of new AI tools is sometimes compared to the introduction of calculators, which required educators to change their ways. But Leitzinger worried that students do not need to know anything about a subject before pasting their essay question into ChatGPT, skipping several important steps in the process of learning. A student at a British university in his early 20s who wanted to remain anonymous told AFP he found ChatGPT was a useful tool for compiling lecture notes, searching the internet and generating ideas. "I think that using ChatGPT to write your work for you is not right because it's not what you're supposed to be at university for," he said. The problem goes beyond high school and university students. Academic journals are struggling to cope with a massive influx of AI-generated scientific papers. Book publishing is also not immune, with one startup planning to pump out 8,000 AI-written books a year. "Writing is thinking, thinking is writing, and when we eliminate that process, what does that mean for thinking?" Leitzinger asked.
[3]
A.I. Is Homogenizing Our Thoughts
In an experiment last year at the Massachusetts Institute of Technology, more than fifty students from universities around Boston were split into three groups and asked to write SAT-style essays in response to broad prompts such as "Must our achievements benefit others in order to make us truly happy?" One group was asked to rely on only their own brains to write the essays. A second was given access to Google Search to look up relevant information. The third was allowed to use ChatGPT, the artificial-intelligence large language model (L.L.M.) that can generate full passages or essays in response to user queries. As students from all three groups completed the tasks, they wore a headset embedded with electrodes in order to measure their brain activity. According to Nataliya Kosmyna, a research scientist at M.I.T. Media Lab and one of the co-authors of a new working paper documenting the experiment, the results from the analysis showed a dramatic discrepancy: subjects who used ChatGPT demonstrated much less brain activity than either of the other groups. The analysis of the L.L.M. users showed fewer widespread connections between different parts of their brains; less alpha connectivity, which is associated with creativity; and less theta connectivity, which is associated with working memory. Some of the L.L.M. users felt "no ownership whatsoever" over the essays they'd produced, and during one round of testing eighty per cent could not quote from what they'd putatively written. The M.I.T. study is among the first to scientifically measure what Kosmyna called the "cognitive cost" of relying on A.I. to perform tasks that humans previously accomplished more manually. Another striking finding was that the texts produced by the L.L.M. users tended to converge on common words and ideas. SAT prompts are designed to be broad enough to elicit a multiplicity of responses, but the use of A.I. had a homogenizing effect. "The output was very, very similar for all of these different people, coming in on different days, talking about high-level personal, societal topics, and it was skewed in some specific directions," Kosmyna said. For the question about what makes us "truly happy," the L.L.M. users were much more likely than the other groups to use phrases related to career and personal success. In response to a question about philanthropy ("Should people who are more fortunate than others have more of a moral obligation to help those who are less fortunate?"), the ChatGPT group uniformly argued in favor, whereas essays from the other groups included critiques of philanthropy. With the L.L.M. "you have no divergent opinions being generated," Kosmyna said. She continued, "Average everything everywhere all at once -- that's kind of what we're looking at here." A.I. is a technology of averages: large language models are trained to spot patterns across vast tracts of data; the answers they produce tend toward consensus, both in the quality of the writing, which is often riddled with clichés and banalities, and in the calibre of the ideas. Other, older technologies have aided and perhaps enfeebled writers, of course -- one could say the same about, say, SparkNotes or a computer keyboard. But with A.I. we're so thoroughly able to outsource our thinking that it makes us more average, too. In a way, anyone who deploys ChatGPT to compose a wedding toast or draw up a contract or write a college paper, as an astonishing number of students are evidently already doing, is in an experiment like M.I.T.'s. 
According to Sam Altman, the C.E.O. of OpenAI, we are on the verge of what he calls "the gentle singularity." In a recent blog post with that title, Altman wrote that "ChatGPT is already more powerful than any human who has ever lived. Hundreds of millions of people rely on it every day and for increasingly important tasks." In his telling, the human is merging with the machine, and his company's artificial-intelligence tools are improving on the old, soggy system of using our organic brains: they "significantly amplify the output of people using them," he wrote. But we don't know the long-term consequences of mass A.I. adoption, and, if these early experiments are any indication, the amplified output that Altman foresees may come at a substantive cost to quality. In April, researchers at Cornell published the results of another study that found evidence of A.I.-induced homogenization. Two groups of users, one American and one Indian, answered writing prompts that drew on aspects of their cultural backgrounds: "What is your favorite food and why?"; "Which is your favorite festival/holiday and how do you celebrate it?" One subset of Indian and American participants used a ChatGPT-driven auto-complete tool, which fed them word suggestions whenever they paused, while another subset wrote unaided. The writings of the Indian and American participants who used A.I. "became more similar" to one another, the paper concluded, and more geared toward "Western norms." A.I. users were most likely to answer that their favorite food was pizza (sushi came in second) and that their favorite holiday was Christmas. Homogenization happened at a stylistic level, too. An A.I.-generated essay that described chicken biryani as a favorite food, for example, was likely to forgo mentioning specific ingredients such as nutmeg and lemon pickle and instead reference "rich flavors and spices." Of course, a writer can in theory always refuse an A.I.-generated suggestion. But the tools seem to exert a hypnotic effect, causing the constant flow of suggestions to override the writer's own voice. Aditya Vashistha, a professor of information science at Cornell who co-authored the study, compared the A.I. to "a teacher who is sitting behind me every time I'm writing, saying, 'This is the better version.' " She added, "Through such routine exposure, you lose your identity, you lose the authenticity. You lose confidence in your writing." Mor Naaman, a colleague of Vashistha's and a co-author of the study, told me that A.I. suggestions "work covertly, sometimes very powerfully, to change not only what you write but what you think." The result, over time, might be a shift in what "people think is normal, desirable, and appropriate." We often hear A.I. outputs described as "generic" or "bland," but averageness is not necessarily anodyne. Vauhini Vara, a novelist and a journalist whose recent book "Searches" focussed in part on A.I.'s impact on human communication and selfhood, told me that the mediocrity of A.I. texts "gives them an illusion of safety and being harmless." Vara (who previously worked as an editor at The New Yorker) continued, "What's actually happening is a reinforcing of cultural hegemony." OpenAI has a certain incentive to shave the edges off our attitudes and communication styles, because the more people find the models' output acceptable, the broader the swath of humanity it can convert to paying subscribers. Averageness is efficient: "You have economies of scale if everything is the same," Vara said. 
With the "gentle singularity" Altman predicted in his blog post, "a lot more people will be able to create software, and art," he wrote. Already, A.I. tools such as the ideation software Figma ("Your creativity, unblocked") and Adobe's mobile A.I. app ("the power of creative AI") promise to put us all in touch with our muses. But other studies have suggested the challenges of automating originality. Data collected at Santa Clara University, in 2024, examined A.I. tools' efficacy as aids for two standard types of creative-thinking tasks: making product improvements and foreseeing "improbable consequences." One set of subjects used ChatGPT to help them answer questions such as "How could you make a stuffed toy animal more fun to play with?" and "Suppose that gravity suddenly became incredibly weak, and objects could float away easily. What would happen?" The other set used Oblique Strategies, a set of abstruse prompts printed on a deck of cards, written by the musician Brian Eno and the painter Peter Schmidt, in 1975, as a creativity aid. The testers asked the subjects to aim for originality, but once again the group using ChatGPT came up with a more semantically similar, more homogenized set of ideas. Max Kreminski, who helped carry out the analysis and now works with the generative-A.I. startup Midjourney, told me that when people use A.I. in the creative process they tend to gradually cede their original thinking. At first, users tend to present their own wide range of ideas, Kreminski explained, but as ChatGPT continues to instantly spit out high volumes of acceptable-looking text users tend to go into a "curationist mode." The influence is unidirectional, and not in the direction you'd hope: "Human ideas don't tend to influence what the machine is generating all that strongly," Kreminski said; ChatGPT pulls the user "toward the center of mass for all of the different users that it's interacted with in the past." As a conversation with an A.I. tool goes on, the machine fills up its "context window," the technical term for its working memory. When the context window reaches capacity, the A.I. seems to be more likely to repeat or rehash material it has already produced, becoming less original still. The one-off experiments at M.I.T., Cornell, and Santa Clara are all small in scale, involving fewer than a hundred test subjects each, and much about A.I.'s effects remains to be studied and learned. In the meantime, on the Mark Zuckerberg-owned Meta AI app, you can see a feed containing content that millions of strangers are generating. It's a surreal flood of overly smooth images, filtered video clips, and texts generated for everyday tasks such as writing a "detailed, professional email for rescheduling a meeting." One prompt I recently scrolled past stood out to me. A user named @kavi908 asked the Meta chatbot to analyze "whether AI might one day surpass human intelligence." The chatbot responded with a slew of blurbs; under "Future Scenarios," it listed four possibilities. All of them were positive: A.I. would improve one way or another, to the benefit of humanity. There were no pessimistic predictions, no scenarios in which A.I. failed or caused harm. The model's averages -- shaped, perhaps, by pro-tech biases baked in by Meta -- narrowed the outcomes and foreclosed a diversity of thought. But you'd have to turn off your brain activity entirely to believe that the chatbot was telling the whole story. ♦
[4]
What Happens After A.I. Destroys College Writing?
On a blustery spring Thursday, just after midterms, I went out for noodles with Alex and Eugene, two undergraduates at New York University, to talk about how they use artificial intelligence in their schoolwork. When I first met Alex, last year, he was interested in a career in the arts, and he devoted a lot of his free time to photo shoots with his friends. But he had recently decided on a more practical path: he wanted to become a C.P.A. His Thursdays were busy, and he had forty-five minutes until a study session for an accounting class. He stowed his skateboard under a bench in the restaurant and shook his laptop out of his bag, connecting to the internet before we sat down. Alex has wavy hair and speaks with the chill, singsong cadence of someone who has spent a lot of time in the Bay Area. He and Eugene scanned the menu, and Alex said that they should get clear broth, rather than spicy, "so we can both lock in our skin care." Weeks earlier, when I'd messaged Alex, he had said that everyone he knew used ChatGPT in some fashion, but that he used it only for organizing his notes. In person, he admitted that this wasn't remotely accurate. "Any type of writing in life, I use A.I.," he said. He relied on Claude for research, DeepSeek for reasoning and explanation, and Gemini for image generation. ChatGPT served more general needs. "I need A.I. to text girls," he joked, imagining an A.I.-enhanced version of Hinge. I asked if he had used A.I. when setting up our meeting. He laughed, and then replied, "Honestly, yeah. I'm not tryin' to type all that. Could you tell?" OpenAI released ChatGPT on November 30, 2022. Six days later, Sam Altman, the C.E.O., announced that it had reached a million users. Large language models like ChatGPT don't "think" in the human sense -- when you ask ChatGPT a question, it draws from the data sets it has been trained on and builds an answer based on predictable word patterns. Companies had experimented with A.I.-driven chatbots for years, but most sputtered upon release; Microsoft's 2016 experiment with a bot named Tay was shut down after sixteen hours because it began spouting racist rhetoric and denying the Holocaust. But ChatGPT seemed different. It could hold a conversation and break complex ideas down into easy-to-follow steps. Within a month, Google's management, fearful that A.I. would have an impact on its search-engine business, declared a "code red." Among educators, an even greater panic arose. It was too deep into the school term to implement a coherent policy for what seemed like a homework killer: in seconds, ChatGPT could collect and summarize research and draft a full essay. Many large campuses tried to regulate ChatGPT and its eventual competitors, mostly in vain. I asked Alex to show me an example of an A.I.-produced paper. Eugene wanted to see it, too. He used a different A.I. app to help with computations for his business classes, but he had never gotten the hang of using it for writing. "I got you," Alex told him. (All the students I spoke with are identified by pseudonyms.) He opened Claude on his laptop. I noticed a chat that mentioned abolition. "We had to read Robert Wedderburn for a class," he explained, referring to the nineteenth-century Jamaican abolitionist. "But, obviously, I wasn't tryin' to read that." He had prompted Claude for a summary, but it was too long for him to read in the ten minutes he had before class started. He told me, "I said, 'Turn it into concise bullet points.' 
" He then transcribed Claude's points in his notebook, since his professor ran a screen-free classroom. Alex searched until he found a paper for an art-history class, about a museum exhibition. He had gone to the show, taken photographs of the images and the accompanying wall text, and then uploaded them to Claude, asking it to generate a paper according to the professor's instructions. "I'm trying to do the least work possible, because this is a class I'm not hella fucking with," he said. After skimming the essay, he felt that the A.I. hadn't sufficiently addressed the professor's questions, so he refined the prompt and told it to try again. In the end, Alex's submission received the equivalent of an A-minus. He said that he had a basic grasp of the paper's argument, but that if the professor had asked him for specifics he'd have been "so fucked." I read the paper over Alex's shoulder; it was a solid imitation of how an undergraduate might describe a set of images. If this had been 2007, I wouldn't have made much of its generic tone, or of the precise, box-ticking quality of its critical observations. Eugene, serious and somewhat solemn, had been listening with bemusement. "I would not cut and paste like he did, because I'm a lot more paranoid," he said. He's a couple of years younger than Alex and was in high school when ChatGPT was released. At the time, he experimented with A.I. for essays but noticed that it made easily noticed errors. "This passed the A.I. detector?" he asked Alex. When ChatGPT launched, instructors adopted various measures to insure that students' work was their own. These included requiring them to share time-stamped version histories of their Google documents, and designing written assignments that had to be completed in person, over multiple sessions. But most detective work occurs after submission. Services like GPTZero, Copyleaks, and Originality.ai analyze the structure and syntax of a piece of writing and assess the likelihood that it was produced by a machine. Alex said that his art-history professor was "hella old," and therefore probably didn't know about such programs. We fed the paper into a few different A.I.-detection websites. One said there was a twenty-eight-per-cent chance that the paper was A.I.-generated; another put the odds at sixty-one per cent. "That's better than I expected," Eugene said. I asked if he thought what his friend had done was cheating, and Alex interrupted: "Of course. Are you fucking kidding me?" As we looked at Alex's laptop, I noticed that he had recently asked ChatGPT whether it was O.K. to go running in Nike Dunks. He had concluded that ChatGPT made for the best confidant. He consulted it as one might a therapist, asking for tips on dating and on how to stay motivated during dark times. His ChatGPT sidebar was an index of the highs and lows of being a young person. He admitted to me and Eugene that he'd used ChatGPT to draft his application to N.Y.U. -- our lunch might never have happened had it not been for A.I. "I guess it's really dishonest, but, fuck it, I'm here," he said. "It's cheating, but I don't think it's, like, cheating," Eugene said. He saw Alex's art-history essay as a victimless crime. He was just fulfilling requirements, not training to become a literary scholar. Alex had to rush off to his study session. I told Eugene that our conversation had made me wonder about my function as a professor. He asked if I taught English, and I nodded. "Mm, O.K.," he said, and laughed. "So you're, like, majorly affected." 
I teach at a small liberal-arts college, and I often joke that a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut. My classes are small and intimate, driven by processes and pedagogical modes, like letting awkward silences linger, that are difficult to scale. As a result, I have always had a vague sense that my students are learning something, even when it is hard to quantify. In the past, if I was worried that a paper had been plagiarized, I would enter a few phrases from it into a search engine and call it due diligence. But I recently began noticing that some students' writing seemed out of synch with how they expressed themselves in the classroom. One essay felt stitched together from two minds -- half of it was polished and rote, the other intimate and unfiltered. Having never articulated a policy for A.I., I took the easy way out. The student had had enough shame to write half of the essay, and I focussed my feedback on improving that part. It's easy to get hung up on stories of academic dishonesty. Late last year, in a survey of college and university leaders, fifty-nine per cent reported an increase in cheating, a figure that feels conservative when you talk to students. A.I. has returned us to the question of what the point of higher education is. Until we're eighteen, we go to school because we have to, studying the Second World War and reducing fractions while undergoing a process of socialization. We're essentially learning how to follow rules. College, however, is a choice, and it has always involved the tacit agreement that students will fulfill a set of tasks, sometimes pertaining to subjects they find pointless or impractical, and then receive some kind of credential. But even for the most mercenary of students, the pursuit of a grade or a diploma has come with an ancillary benefit. You're being taught how to do something difficult, and maybe, along the way, you come to appreciate the process of learning. But the arrival of A.I. means that you can now bypass the process, and the difficulty, altogether. There are no reliable figures for how many American students use A.I., just stories about how everyone is doing it. A 2024 Pew Research Center survey of students between the ages of thirteen and seventeen suggests that a quarter of teens currently use ChatGPT for schoolwork, double the figure from 2023. OpenAI recently released a report claiming that one in three college students uses its products. There's good reason to believe that these are low estimates. If you grew up Googling everything or using Grammarly to give your prose a professional gloss, it isn't far-fetched to regard A.I. as just another productivity tool. "I see it as no different from Google," Eugene said. "I use it for the same kind of purpose." Being a student is about testing boundaries and staying one step ahead of the rules. While administrators and educators have been debating new definitions for cheating and discussing the mechanics of surveillance, students have been embracing the possibilities of A.I. A few months after the release of ChatGPT, a Harvard undergraduate got approval to conduct an experiment in which it wrote papers that had been assigned in seven courses. The A.I. skated by with a 3.57 G.P.A., a little below the school's average. Upstart companies introduced products that specialized in "humanizing" A.I.-generated writing, and TikTok influencers began coaching their audiences on how to avoid detection. 
Unable to keep pace, academic administrations largely stopped trying to control students' use of artificial intelligence and adopted an attitude of hopeful resignation, encouraging teachers to explore the practical, pedagogical applications of A.I. In certain fields, this wasn't a huge stretch. Studies show that A.I. is particularly effective in helping non-native speakers acclimate to college-level writing in English. In some STEM classes, using generative A.I. as a tool is acceptable. Alex and Eugene told me that their accounting professor encouraged them to take advantage of free offers on new A.I. products available only to undergraduates, as companies competed for student loyalty throughout the spring. In May, OpenAI announced ChatGPT Edu, a product specifically marketed for educational use, after schools including Oxford University, Arizona State University, and the University of Pennsylvania's Wharton School of Business experimented with incorporating A.I. into their curricula. This month, the company detailed plans to integrate ChatGPT into every dimension of campus life, with students receiving "personalized" A.I. accounts to accompany them throughout their years in college.
[5]
AI Is Already Crushing the News Industry
When tech companies first rolled out generative-AI products, some critics immediately feared a media collapse. Every bit of writing, imagery, and video became suspect. But for news publishers and journalists, another calamity was on the horizon. Chatbots have proved adept at keeping users locked into conversations. They do so by answering every question, often through summarizing articles from news publishers. Suddenly, fewer people are traveling outside the generative-AI sites -- a development that poses an existential threat to the media, and to the livelihood of journalists everywhere. According to one comprehensive study, Google's AI Overviews -- a feature that summarizes web pages above the site's usual search results -- has already reduced traffic to outside websites by more than 34 percent. The CEO of DotDash Meredith, which publishes People, Better Homes & Gardens, and Food & Wine, recently said the company is preparing for a possible "Google Zero" scenario. Some have speculated that traffic drops resulting from chatbots were part of the reason outlets such as Business Insider and the Daily Dot have recently had layoffs. "Business Insider was built for an internet that doesn't exist anymore," one former staffer recently told the media reporter Oliver Darcy. Not all publishers are at equal risk: Those that primarily rely on general-interest readers who come in from search engines and social media may be in worse shape than specialized publishers with dedicated subscribers. Yet no one is totally safe. Released in May 2024, AI Overviews joins ChatGPT, Claude, Grok, Perplexity, and other AI-powered products that, combined, have replaced search for more than 25 percent of Americans, according to one study. Companies train chatbots on huge amounts of stolen books and articles, as my previous reporting has shown, and scrape news articles to generate responses with up-to-date information. Large language models also train on copious materials in the public domain -- but much of what is most useful to these models, particularly as users seek real-time information from chatbots, is news that exists behind a paywall. Publishers are creating the value, but AI companies are intercepting their audiences, subscription fees, and ad revenue. I asked Anthropic, xAI, Perplexity, Google, and OpenAI about this problem. Anthropic and xAI did not respond. Perplexity did not directly comment on the issue. Google argued that it was sending "higher-quality" traffic to publisher websites, meaning that users purportedly spend more time on the sites once they click over, but declined to offer any data in support of this claim. OpenAI referred me to an article showing that ChatGPT is sending more traffic to websites overall than it did previously, but the raw numbers are fairly modest. The BBC, for example, reportedly received 118,000 visits from ChatGPT in April, but that's practically nothing relative to the hundreds of millions of visitors it receives each month. The article also shows that traffic from ChatGPT has in fact declined for some publishers. Over the past few months, I've spoken with several news publishers, all of whom see AI as a near-term existential threat to their business.
Rich Caccappolo, the vice chair of media at the company that publishes the Daily Mail -- the U.K.'s largest newspaper by circulation -- told me that all publishers "can see that Overviews are going to unravel the traffic that they get from search, undermining a key foundational pillar of the digital-revenue model." AI companies have claimed that chatbots will continue to send readers to news publishers, but have not cited evidence to support this claim. I asked Caccappolo if he thought AI-generated answers could put his company out of business. "That is absolutely the fear," he told me. "And my concern is it's not going to happen in three or five years -- I joke it's going to happen next Tuesday." Book publishers, especially those of nonfiction and textbooks, also told me they anticipate a massive decrease in sales, as chatbots can both summarize their books and give detailed explanations of their contents. Publishers have tried to fight back, but my conversations revealed how much the deck is stacked against them. The world is changing fast, perhaps irrevocably. The institutions that comprise our country's free press are fighting for their survival. Publishers have been responding in two ways. First: legal action. At least 12 lawsuits involving more than 20 publishers have been filed against AI companies. Their outcomes are far from certain, and the cases might be decided only after irreparable damage has been done. The second response is to make deals with AI companies, allowing their products to summarize articles or train on editorial content. Some publishers, such as The Atlantic, are pursuing both strategies (the company has a corporate partnership with OpenAI and is suing Cohere). At least 72 licensing deals have been made between publishers and AI companies in the past two years. But figuring out how to approach these deals is no easy task. Caccappolo told me he has "felt a tremendous imbalance at the negotiating table" -- a sentiment shared by others I spoke with. One problem is that there is no standard price for training an LLM on a book or an article. The AI companies know what kinds of content they want, and having already demonstrated an ability and a willingness to take it without paying, they have extraordinary leverage when it comes to negotiating. I've learned that books have sometimes been licensed for only a couple hundred dollars each, and that a publisher that asks too much may be turned down, only for tech companies to take their material anyway. Another issue is that different content appears to have different value for different LLMs. The digital-media company Ziff Davis has studied web-based AI training data sets and observed that content from "high-authority" sources, such as major newspapers and magazines, appears more desirable to AI companies than blog and social-media posts. (Ziff Davis is suing OpenAI for training on its articles without paying a licensing fee.) Researchers at Microsoft have also written publicly about "the importance of high-quality data" and have suggested that textbook-style content may be particularly desirable. But beyond a few specific studies like these, there is little insight into what kind of content most improves an LLM, leaving a lot of unanswered questions. Are biographies more or less important than histories? Does high-quality fiction matter? Are old books worth anything?
Amy Brand, the director and publisher of the MIT Press, told me that "a solution that promises to help determine the fair value of specific human-authored content within the active marketplace for LLM training data would be hugely beneficial." A publisher's negotiating power is also limited by the degree to which it can stop an AI company from using its work without consent. There's no surefire way to keep AI companies from scraping news websites; even the Robots Exclusion Protocol, the standard opt-out method available to news publishers, is easily circumvented. Because AI companies generally keep their training data a secret, and because there is no easy way for publishers to check which chatbots are summarizing their articles, publishers have difficulty figuring out which AI companies they might sue or try to strike a deal with. Some experts, such as Tim O'Reilly, have suggested that laws should require the disclosure of copyrighted training data, but no existing legislation requires companies to reveal specific authors or publishers that have been used for AI training material. Of course, all of this raises a question. AI companies seem to have taken publishers' content already. Why would they pay for it now, especially because some of these companies have argued in court that training LLMs on copyrighted books and articles is fair use? Perhaps the deals are simply hedges against an unfavorable ruling in court. If AI companies are prevented from training on copyrighted work for free, then organizations that have existing deals with publishers might be ahead of their competition. Publisher deals are also a means of settling without litigation -- which may be a more desirable path for publishers who are risk-averse or otherwise uncertain. But the legal scholar James Grimmelmann told me that AI companies could also respond to complaints like Ziff Davis's by arguing that the deals involve more than training on a publisher's content: They may also include access to cleaner versions of articles, ongoing access to a daily or real-time feed, or a release from liability for their chatbot's plagiarism. Tech companies could argue that the money exchanged in these deals is exclusively for the nonlicensing elements, so they aren't paying for training material. It's worth noting that tech companies almost always refer to these deals as partnerships, not licensing deals, likely for this reason. Regardless, the modest income from these arrangements is not going to save publishers: Even a good deal, one publisher told me, won't come anywhere near recouping the revenue lost from decreased readership. Publishers that can figure out how to survive the generative-AI assault may need to invent different business models and find new streams of revenue. There may be viable strategies, but none of the publishers I spoke with has a clear idea of what they are. Publishers have become accustomed to technological threats over the past two decades, perhaps most notably the loss of ad revenue to Facebook and Google, a company that was recently found to have an illegal monopoly in online advertising (though the company has said it will appeal the ruling). But the rise of generative AI may spell doom for the Fourth Estate: With AI, the tech industry even deprives publishers of an audience. In the event of publisher mass extinction, some journalists will be able to endure. The so-called creator economy shows that it's possible to provide high-quality news and information through Substack, YouTube, and even TikTok. 
But not all reporters can simply move to these platforms. Investigative journalism that exposes corruption and malfeasance by powerful people and companies comes with a serious risk of legal repercussions, and requires resources -- such as time and money -- that tend to be in short supply for freelancers. If news publishers start going out of business, won't AI companies suffer too? Their chatbots need access to journalism to answer questions about the world. Doesn't the tech industry have an interest in the survival of newspapers and magazines? In fact, there are signs that AI companies believe publishers are no longer needed. In December, at The New York Times' DealBook Summit, OpenAI CEO Sam Altman was asked how writers should feel about their work being used for AI training. "I think we do need a new deal, standard, protocol, whatever you want to call it, for how creators are going to get rewarded." He described an "opt-in" regime where an author could receive "micropayments" when their name, likeness, and style were used. But this could not be further from OpenAI's current practice, in which products are already being used to imitate the styles of artists and writers, without compensation or even an effective opt-out. Google CEO Sundar Pichai was also asked about writer compensation at the DealBook Summit. He suggested that a market solution would emerge, possibly one that wouldn't involve publishers in the long run. This is typical. As in other industries they've "disrupted," Silicon Valley moguls seem to perceive old, established institutions as middlemen to be removed for greater efficiency. Uber enticed drivers to work for it, crushed the traditional taxi industry, and now controls salaries, benefits, and workloads algorithmically. This has meant greater convenience for consumers, just as AI arguably does -- but it has also proved ruinous for many people who were once able to earn a living wage from professional driving. Pichai seemed to envision a future that may have a similar consequence for journalists. "There'll be a marketplace in the future, I think -- there'll be creators who will create for AI," he said. "People will figure it out."
[6]
Is AI rewiring our minds? Scientists probe cognitive cost of chatbots
A mannequin wears an EEG cap used to monitor the brain activity of subjects in a study tracking the cognitive cost of using ChatGPT at the MIT Media Lab in Cambridge, Massachusetts. (Sophie Park/For The Washington Post)

In our daily lives, the use of artificial intelligence programs such as ChatGPT is obvious. Students employ them to churn out term papers. Office workers ask them to organize calendars and help write reports. Parents prompt them to create personalized bedtime stories for toddlers. Inside our brains, how the persistent use of AI molds the mind remains unclear. As our reliance on having vast knowledge rapidly synthesized at our fingertips increases, scientists are racing to understand how the frequent use of large language model programs, or LLMs, is affecting our brains -- probing worries that they weaken cognitive abilities and stifle the diversity of our ideas. Headlines proclaiming that AI is making us stupid and lazy went viral this month after the release of a study from MIT Media Lab. Though researchers caution that this study and others across the field have not drawn hard conclusions on whether AI is reshaping our brains in pernicious ways, the MIT work and other small studies published this year offer unsettling suggestions. One U.K. survey study of more than 600 people published in January found "significant negative correlation between the frequent use of AI tools and critical thinking abilities," as younger users in particular often relied on the programs as substitutes, not supplements, for routine tasks. The University of Pennsylvania's Wharton School published a study last week showing that high school students in Turkey with access to a ChatGPT-style tutor performed significantly better at solving practice math problems. But when the program was taken away, they performed worse than students who had used no AI tutor. And the MIT study that garnered massive attention -- and some backlash -- involved researchers who measured brain activity of mostly college students as they used ChatGPT to write SAT-style essays during three sessions. Their work was compared with that of others who used Google or nothing at all. Researchers outfitted 54 essay writers with caps covered in electrodes that monitor electrical signals in the brain. The EEG data revealed writers who used ChatGPT exhibited the lowest brain engagement and "consistently underperformed at neural, linguistic, and behavioral levels," according to the study. Ultimately, they delivered essays that sounded alike and lacked personal flourishes. English teachers who read the papers called them "soulless." The "brain-only" group showed the greatest neural activations and connections between regions of the brain that "correlated with stronger memory, greater semantic accuracy, and firmer ownership of written work." In a fourth session, members from the ChatGPT group were asked to rewrite one of their previous essays without the tool, but participants remembered little of their previous work. Skeptics point to myriad limitations. They argue that reduced neural connectivity as measured by EEG doesn't necessarily indicate poor cognition or brain health. For the study participants, the stakes were also low -- entrance to college, for example, didn't depend on completing the essays. Also, only 18 participants returned for the fourth and final session. Lead MIT researcher Nataliya Kosmyna acknowledges that the study was limited in scope and, contrary to viral internet headlines about the paper, was not gauging whether ChatGPT is making us dumb.
The paper has not been peer-reviewed, but her team released preliminary findings to spark conversation about the impact of ChatGPT, particularly on developing brains, and the risks of the Silicon Valley ethos of rolling out powerful technology quickly. "Maybe we should not apply this culture blindly in the spaces where the brain is fragile," Kosmyna said in an interview. OpenAI, the California company that released ChatGPT in 2022, did not respond to requests for comment. (The Washington Post has a content partnership with OpenAI.) Michael Gerlich, who spearheaded the U.K. survey study, called the MIT approach "brilliant" and said it showed that AI is supercharging what is known as "cognitive off-loading," where we use a physical action to reduce demands on our brain. But instead of off-loading simple data -- like phone numbers we once memorized but now store in our phones -- people relying on LLMs off-load the critical thinking process. His study suggested younger people and those with less education are quicker to off-load critical thinking to LLMs because they are less confident in their skills. ("It's become a part of how I think," one student later told researchers.) "It's a large language model. You think it's smarter than you. And you adopt that," said Gerlich, a professor at SBS Swiss Business School in Zurich. Still, Kosmyna, Gerlich and other researchers warn against drawing sweeping conclusions -- no long-term studies have been completed on the effects on cognition of the nascent technology. Researchers also stress that the benefits of AI may ultimately outweigh risks, freeing our minds to tackle bigger and bolder thinking.

Deep-rooted fears and avenues for creativity

Fear of technology rewiring our brains is nothing new. Socrates warned writing would make humans forgetful. In the mid-1970s, teachers fretted that cheap calculators might strip students of their abilities to do simple math. More recently, the rise of search engines spurred fears of "digital amnesia." "It wasn't that long ago that we were all panicking that Google is making us stupid and now that Google is more part of our everyday lives, it doesn't feel so scary," said Samuel J. Gilbert, professor of cognitive neuroscience at University College London. "ChatGPT is the new target for some of the concerns. We need to be very careful and balanced in the way that we interpret these findings" of the MIT study. The MIT paper suggests that ChatGPT essay writers illustrate "cognitive debt," a condition in which relying on such programs replaces the effortful cognitive processes needed for independent thinking. Essays become biased and superficial. In the long run, such cognitive debt might make us easier to manipulate and stifle creativity. But Gilbert argues that the MIT study of essay writers could also be viewed as an example of what he calls "cognitive spillover," or discarding some information to clear mental bandwidth for potentially more ambitious thoughts. "Just because people paid less mental effort to writing the essays that the experimenters asked them to do, that's not necessarily a bad thing," he said. "Maybe they had more useful, more valuable things they could do with their minds." Experts suggest that perhaps AI, in the long run and deployed right, will prove to augment, not replace, critical thinking.
The Wharton School study on nearly 1,000 Turkish high school students also included a group that had access to a ChatGPT-style tutor program with built-in safeguards that provided teacher-designed hints instead of giving away answers. Those students performed extremely well and did roughly the same as students who did not use AI when they were asked to solve problems unassisted, the study showed. More research is needed into the best ways to shape user behaviors and create LLM programs to avoid damaging critical thinking skills, said Aniket Kittur, professor at Carnegie Mellon University's Human-Computer Interaction Institute. He is part of a team creating AI programs designed to light creative sparks, not churn out finished but bland outputs. One program, dubbed BioSpark, aims to help users solve problems through inspiration in the natural world -- say, for example, creating a better bike rack to mount on cars. Instead of a bland text interface, the program might display images and details of different animal species to serve as inspiration, such as the shape of frog legs or the stickiness of snail mucus that could mirror a gel that keeps bicycles secure. Users can cycle through relevant scientific research, saving ideas a la Pinterest, then asking more detailed questions of the AI program. "We need both new ways of interacting with these tools that unlocks this kind of creativity," Kittur said. "And then we need rigorous ways of measuring how successful those tools are. That's something that you can only do with research." Research into how AI programs can augment human creativity is expanding dramatically but doesn't receive as much attention because of the technology-wary zeitgeist of the public, said Sarah Rose Siskind, a New York-based science and comedy writer who consults with AI companies. Siskind believes the public needs better education on how to use and think about AI -- she created a video on how she uses AI to expand her joke repertoire and reach newer audiences. She said she also has a forthcoming research paper exploring ChatGPT's usefulness in comedy. "I can use AI to understand my audience with more empathy and expertise than ever before," Siskind said. "So there are all these new frontiers of creativity. That really should be emphasized."
[7]
'Writing is thinking': do students who use ChatGPT learn less?
Paris (AFP) - When Jocelyn Leitzinger had her university students write about times in their lives they had witnessed discrimination, she noticed that a woman named Sally was the victim in many of the stories. "It was very clear that ChatGPT had decided this is a common woman's name," said Leitzinger, who teaches an undergraduate class on business and society at the University of Illinois in Chicago. "They weren't even coming up with their own anecdotal stories about their own lives," she told AFP. Leitzinger estimated that around half of her 180 students used ChatGPT inappropriately at some point last semester -- including when writing about the ethics of artificial intelligence (AI), which she called both "ironic" and "mind-boggling". So she was not surprised by recent research which suggested that students who use ChatGPT to write essays engage in less critical thinking. The preprint study, which has not been peer-reviewed, was shared widely online and clearly struck a chord with some frustrated educators. The team of MIT researchers behind the paper have received more than 3,000 emails from teachers of all stripes since it was published online last month, lead author Nataliya Kosmyna told AFP.
- 'Soulless' AI essays -
For the small study, 54 adult students from the greater Boston area were split into three groups. One group used ChatGPT to write 20-minute essays, one used a search engine, and the final group had to make do with only their brains. The researchers used EEG devices to measure the brain activity of the students, and two teachers marked the essays. The ChatGPT users scored significantly worse than the brain-only group on all levels. The EEG showed that different areas of their brains connected to each other less often. And more than 80 percent of the ChatGPT group could not quote anything from the essay they had just written, compared to around 10 percent of the other two groups. By the third session, the ChatGPT group appeared to be mostly focused on copying and pasting. The teachers said they could easily spot the "soulless" ChatGPT essays because they had good grammar and structure but lacked creativity, personality and insight. However, Kosmyna pushed back against media reports claiming the paper showed that using ChatGPT made people lazier or more stupid. She pointed to the fourth session, when the brain-only group used ChatGPT to write their essay and displayed even higher levels of neural connectivity. Kosmyna emphasised it was too early to draw conclusions from the study's small sample size but called for more research into how AI tools could be used more carefully to help learning. Ashley Juavinett, a neuroscientist at the University of California San Diego who was not involved in the research, criticised some "offbase" headlines that wrongly extrapolated from the preprint. "This paper does not contain enough evidence nor the methodological rigour to make any claims about the neural impact of using LLMs (large language models such as ChatGPT) on our brains," she told AFP.
- Thinking outside the bot -
Leitzinger said the research reflected how she had seen student essays change since ChatGPT was released in 2022, as both spelling errors and authentic insight became less common. Sometimes students do not even change the font when they copy and paste from ChatGPT, she said. But Leitzinger called for empathy for students, saying they can get confused when the use of AI is encouraged by universities in some classes but banned in others.
The usefulness of new AI tools is sometimes compared to the introduction of calculators, which required educators to change their ways. But Leitzinger worried that students do not need to know anything about a subject before pasting their essay question into ChatGPT, skipping several important steps in the process of learning. A student at a British university in his early 20s who wanted to remain anonymous told AFP he found ChatGPT was a useful tool for compiling lecture notes, searching the internet and generating ideas. "I think that using ChatGPT to write your work for you is not right because it's not what you're supposed to be at university for," he said. The problem goes beyond high school and university students. Academic journals are struggling to cope with a massive influx of AI-generated scientific papers. Book publishing is also not immune, with one startup planning to pump out 8,000 AI-written books a year. "Writing is thinking, thinking is writing, and when we eliminate that process, what does that mean for thinking?" Leitzinger asked.
Recent studies reveal significant differences in brain activity and writing quality when using AI tools like ChatGPT, raising questions about learning, creativity, and the future of education and journalism.
A recent study conducted by researchers at the Massachusetts Institute of Technology (MIT) has shed light on the cognitive effects of using AI tools like ChatGPT for writing tasks. The study, led by Nataliya Kosmyna, involved 54 participants split into three groups: one using ChatGPT, another using a search engine, and a third relying solely on their own knowledge 1.
The findings revealed significant differences in brain activity and memory retention among the groups. Participants using ChatGPT showed:
Notably, over 80% of ChatGPT users couldn't quote from their own essays immediately after writing them, compared to only 10% in the other groups 2.
Beyond brain activity, the study uncovered a concerning trend towards homogenization of content when using AI tools. Essays produced with ChatGPT tended to converge on common words and ideas, lacking the diversity of perspectives found in essays written without AI assistance 3.
This homogenization effect was further corroborated by a separate study from Cornell University, which found that AI-assisted writing from different cultural backgrounds became more similar and aligned with Western norms 3.
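For readers who want a concrete sense of how this kind of convergence can be quantified, the short Python sketch below computes the average pairwise cosine similarity of TF-IDF vectors for a set of essays; a higher score means the texts share more of the same vocabulary. This is purely an illustration under assumed choices (TF-IDF features, scikit-learn, placeholder essay snippets), not the method used in the MIT or Cornell studies.

# Illustrative sketch only: one way to measure how similar a set of essays is.
# Assumes scikit-learn is installed; the essay snippets are placeholders,
# not data from the studies discussed above.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def mean_pairwise_similarity(essays):
    """Average cosine similarity between every pair of essays (0 to 1)."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(essays)
    sims = cosine_similarity(vectors)
    pairs = list(combinations(range(len(essays)), 2))
    return sum(sims[i, j] for i, j in pairs) / len(pairs)


# Hypothetical example: a higher score for the AI-assisted set would reflect
# the convergence on common words and ideas the studies describe.
ai_assisted = [
    "Discrimination harms workplaces and society as a whole.",
    "Discrimination harms society and workplaces as a whole.",
]
brain_only = [
    "When my grandmother was refused service, I learned how quiet bias can be.",
    "A manager at my first job passed over the most qualified candidate.",
]
print(round(mean_pairwise_similarity(ai_assisted), 2))
print(round(mean_pairwise_similarity(brain_only), 2))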
The widespread adoption of AI tools in educational settings has raised concerns among educators. Jocelyn Leitzinger, a professor at the University of Illinois in Chicago, estimated that around half of her 180 students used ChatGPT inappropriately last semester 2.
Educators worry that relying heavily on AI tools may lead to:
- Off-loading of critical thinking itself, not just simple facts
- Essays that lose authentic insight and personal perspective
- Students skipping important steps in the learning process
Some students, like Alex from New York University, admit to extensive use of AI for various writing tasks, including academic papers 4.
The rise of AI-powered tools is not only affecting education but also posing significant challenges to the news and publishing industries. Key concerns include:
- Lost referral traffic as AI-generated summaries answer queries directly in search results
- A flood of AI-generated submissions to academic journals
- AI-written books crowding the publishing market
Some publishers report traffic reductions of up to 34% due to AI-generated summaries appearing in search results 5.
While these studies provide valuable insights, researchers caution against drawing definitive long-term conclusions. More extensive research is needed to fully understand the impact of AI tools on cognition, learning, and content creation.
As AI technology continues to evolve, educators, publishers, and policymakers face the challenge of adapting to this new landscape while preserving critical thinking skills, creativity, and the value of human-generated content.