3 Sources
[1]
When two years of academic work vanished with a single click
Within a couple of years of ChatGPT's release, I had come to rely on the artificial-intelligence tool for my work as a professor of plant sciences at the University of Cologne in Germany. Having signed up for OpenAI's subscription plan, ChatGPT Plus, I used it as an assistant every day -- to write e-mails, draft course descriptions, structure grant applications, revise publications, prepare lectures, create exams and analyse student responses, and even as an interactive tool as part of my teaching. It was fast and flexible, and I found it reliable in a specific sense: it was always available, remembered the context of ongoing conversations and allowed me to retrieve and refine previous drafts. I was well aware that large language models such as those that power ChatGPT can produce seemingly confident but sometimes incorrect statements, so I never equated its reliability with factual accuracy, but instead relied on the continuity and apparent stability of the workspace.

But in August, I temporarily disabled the 'data consent' option because I wanted to see whether I would still have access to all of the model's functions if I did not provide OpenAI with my data. At that moment, all of my chats were permanently deleted and the project folders were emptied -- two years of carefully structured academic work disappeared. No warning appeared. There was no undo option. Just a blank page. Fortunately, I had saved partial copies of some conversations and materials, but large parts of my work were lost forever.

At first, I thought it was a mistake. I tried different browsers, devices and networks. I cleared the cache, reinstalled the app and even changed the settings back and forth. Nothing helped. When I contacted OpenAI's support, the first responses came from an AI agent. Only after repeated enquiries did a human employee respond, but the answer remained the same: the data were permanently lost and could not be recovered.
Accountability gap

This was not a case of losing random notes or idle chats. Among my discussions with ChatGPT were project folders containing multiple conversations that I had used to develop grant applications, prepare teaching materials, refine publication drafts and design exam analyses. This was intellectual scaffolding that had been built up over a two-year period.

We are increasingly being encouraged to integrate generative AI into research and teaching. Individuals use it for writing, planning and teaching; universities are experimenting with embedding it into curricula. However, my case reveals a fundamental weakness: these tools were not developed with academic standards of reliability and accountability in mind. If a single click can irrevocably delete years of work, ChatGPT cannot, in my opinion and on the basis of my experience, be considered completely safe for professional use.

As a paying subscriber (€20 per month, or US$23), I assumed basic protective measures would be in place, including a warning about irreversible deletion, a recovery option, albeit time-limited, and backups or redundancy. OpenAI, in its responses to me, referred to 'privacy by design' -- which means that everything is deleted without a trace when users deactivate data sharing. The company was clear: once deleted, chats cannot be recovered, and there is no redundancy or backup that would allow such a thing (see 'No going back'). Ultimately, OpenAI fulfilled what it saw as a commitment to my privacy as a user by deleting my information the second I asked it to.
[2]
Professor Reports That OpenAI Deleted His Work, World Laughs in His Face
We've all been there. We lost some part of our digital life, perhaps because we accidentally deleted it or a system failed us in some way. Well, a professor in Germany lost a large amount of work recently after changing his settings with OpenAI's ChatGPT, writing about it this week in Nature. But social media users don't seem very sympathetic. In fact, they're now repeatedly dunking on him for using AI in the first place.

Marcel Bucher, a professor of plant sciences at the University of Cologne, writes that he signed up for a paid ChatGPT plan two years ago and found the AI tool tremendously useful. "Having signed up for OpenAI's subscription plan, ChatGPT Plus, I used it as an assistant every day -- to write e-mails, draft course descriptions, structure grant applications, revise publications, prepare lectures, create exams and analyse student responses, and even as an interactive tool as part of my teaching," wrote Bucher.

He acknowledged that ChatGPT, like all large language models, could be inaccurate, but liked it because it could remember the context of conversations, and he valued the "continuity and apparent stability of the workspace." Then he tinkered with the settings for data consent. From Nature:

But in August, I temporarily disabled the 'data consent' option because I wanted to see whether I would still have access to all of the model's functions if I did not provide OpenAI with my data. At that moment, all of my chats were permanently deleted and the project folders were emptied -- two years of carefully structured academic work disappeared. No warning appeared. There was no undo option. Just a blank page. Fortunately, I had saved partial copies of some conversations and materials, but large parts of my work were lost forever.

Bucher went on to explain that he initially thought it was a mistake and assumed that he would be able to recover his years of data. He reinstalled the app, tried different browsers, and tinkered with more settings.
But nothing worked. He then tried to contact OpenAI but was predictably met with an AI agent, which couldn't help him. He eventually was able to contact a human, but they couldn't help him either. The data was gone.

Again, this is the kind of story that would've likely elicited some sympathy in another era. But here in 2026, when AI is often seen as a slop machine for generating wrong answers and child sexual abuse material, there are more than a few people who will revel in someone losing all their AI chats. "Amazing sob story: 'ChatGPT deleted all the work I hadn't done'," one Bluesky user wrote. "Maybe next time, actually do the work you are paid to do *yourself*, instead of outsourcing it to the climate-killing, suicide-encouraging plagiarism machine," wrote another.

Others floated the possibility that the essay in Nature wasn't even written by Bucher. "This is the dumbest shit I've read in a quite a while," a Bluesky user wrote. "(But, in his defense: there is no particular reason to assume that the guy who published this actually wrote it himself.)"

Bucher did make the point that he was being encouraged to use AI in his work, and there's validity to that complaint. Large institutions are telling their workers to incorporate AI more often under the theory that it's some kind of inevitable future:

We are increasingly being encouraged to integrate generative AI into research and teaching. Individuals use it for writing, planning and teaching; universities are experimenting with embedding it into curricula. However, my case reveals a fundamental weakness: these tools were not developed with academic standards of reliability and accountability in mind. If a single click can irrevocably delete years of work, ChatGPT cannot, in my opinion and on the basis of my experience, be considered completely safe for professional use.
It remains to be seen whether generative AI will truly transform the workplace in ways that actually matter, especially as workers are more skeptical and bosses try to insist on its use. Whatever happens, there will likely be plenty of AI skeptics around to celebrate when someone loses a bunch of work.
[3]
Scientist Horrified as ChatGPT Deletes All His Research
ChatGPT may be an excellent tool in case your strongly worded email to your landlord about that ceiling leak needs a second pair of eyes. It also excels at coming up with a rough first draft for non-mission-critical writing, allowing you to carefully pick it apart and refine it. But like all of its competitors, ChatGPT is plagued by plenty of well-documented shortcomings as well, from rampant hallucinations to a sycophantic tone that can easily lull users into gravely mistaken beliefs. In other words, it's not exactly a tool anybody should rely on to get important work done -- and that's a lesson University of Cologne professor of plant sciences Marcel Bucher learned the hard way.

In a column for Nature, Bucher admitted he'd "lost" two years' worth of "carefully structured academic work" -- including grant applications, publication revisions, lectures, and exams -- after turning off ChatGPT's "data consent" option. He disabled the feature because he "wanted to see whether I would still have access to all of the model's functions if I did not provide OpenAI with my data." But to his dismay, the chats disappeared without a trace in an instant. "No warning appeared," Bucher wrote. "There was no undo option. Just a blank page."

The column was met with an outpouring of schadenfreude on social media, with users questioning how Bucher had gone two years without making any local backups. Others were enraged, calling on the university to fire him for relying so heavily on AI for academic work. Some, however, did take pity. "Well, kudos to Marcel Bucher for sharing a story about a deeply flawed workflow and a stupid mistake," Heidelberg University teaching coordinator Roland Gromes wrote in a post on Bluesky. "A lot of academics believe they can see the pitfalls but all of us can be naive and run into this kind of problems!"
Bucher is the first to admit that ChatGPT can "produce seemingly confident but sometimes incorrect statements," arguing that he never "equated its reliability with factual accuracy." Nonetheless, he "relied on the continuity and apparent stability of the workspace," using ChatGPT Plus as his "assistant every day."

The use of generative AI in the scientific world has proven highly controversial. Scientific journals are being flooded with poorly sourced AI slop, turning the process of peer review into a horror show, as The Atlantic reported this week. Entire fraudulent scientific journals are popping up to capitalize on others who are trying to get their AI slop published. The result? AI slop being peer-reviewed by AI models, further entrenching the pollution of scientific literature. For their part, scientists are constantly being informed of how their work is being cited in various new papers -- only to find that the referenced material was entirely hallucinated.

To be clear, there's zero evidence that Bucher was in any way trying to sell off AI slop to his students or get dubious, AI-generated research published. Nonetheless, his unfortunate experience with the platform should serve as a warning sign to others. In his column, Bucher accused OpenAI of selling ChatGPT Plus subscriptions without ensuring "basic protective measures" to stop years of his work from vanishing in an instant.

In a statement to Nature, OpenAI clarified that chats "cannot be recovered" after being deleted, and challenged Bucher's claim that there was "no warning," saying that "we do provide a confirmation prompt before a user permanently deletes a chat." The company also helpfully recommended that "users maintain personal backups for professional work."
Marcel Bucher, a plant sciences professor at the University of Cologne, lost two years of carefully structured academic work when ChatGPT permanently deleted all his chats after he disabled the data consent option. OpenAI stated the deletion was part of its 'privacy by design' policy, with no recovery option available. The incident raises questions about the reliability of AI tools for professional use.
Marcel Bucher, a professor of plant sciences at the University of Cologne, experienced a catastrophic data loss when ChatGPT permanently erased two years of his academic work without warning. The incident occurred in August when Bucher temporarily disabled the data consent option to test whether he would retain access to all features without sharing his data with OpenAI [1]. At that moment, all his chats were permanently deleted and project folders emptied, leaving only a blank page. The professor had been a ChatGPT Plus subscriber, paying €20 per month (approximately $23), and relied on the platform daily for writing emails, drafting course descriptions, structuring grant applications, revising publications, preparing lectures, creating exams, and analyzing student responses [1].
Source: Nature
After discovering the permanent deletion, Bucher attempted multiple recovery methods, trying different browsers, devices, and networks, clearing the cache, and reinstalling the app. When he contacted OpenAI support, he initially received responses from an AI agent before finally reaching a human employee who confirmed the data were permanently lost and could not be recovered [1]. OpenAI explained its decision as 'privacy by design', meaning everything is deleted without a trace when users deactivate data sharing. In a statement to Nature, OpenAI challenged Bucher's claim that there was no warning, stating that it does "provide a confirmation prompt before a user permanently deletes a chat" [3]. The company recommended that users maintain personal backups for professional work.
Source: Futurism
The professor's account of losing academic work triggered widespread criticism on social media rather than sympathy. Users questioned why Bucher hadn't made local backups over two years and criticized his heavy reliance on generative AI for academic use [2]. One Bluesky user wrote, "Amazing sob story: 'ChatGPT deleted all the work I hadn't done,'" while another suggested, "Maybe next time, actually do the work you are paid to do yourself, instead of outsourcing it to the climate-killing, suicide-encouraging plagiarism machine" [2]. Some users even speculated that Bucher's essay in Nature wasn't written by him. However, Heidelberg University teaching coordinator Roland Gromes offered a more measured response, stating, "Well, kudos to Marcel Bucher for sharing a story about a deeply flawed workflow and a stupid mistake" [3].
Bucher's experience exposes a fundamental accountability gap in how AI platforms handle professional data. As institutions increasingly encourage integration of generative AI into research and teaching, the incident reveals these tools were not developed with academic standards of reliability in mind [1]. The professor acknowledged he was aware that large language models can produce seemingly confident but sometimes incorrect statements, noting he never equated reliability with factual accuracy but instead relied on "the continuity and apparent stability of the workspace" [1]. As a paying subscriber, he assumed basic protective measures would be in place, including warnings about irreversible deletion, time-limited recovery options, and backups or redundancy.
Source: Gizmodo
The use of generative AI in academia remains highly controversial, with scientific journals being flooded with poorly sourced AI slop and entire fraudulent journals emerging to capitalize on AI-generated content [3]. Scientists are constantly finding their work cited in papers where the referenced material was entirely hallucinated [3]. While there's no evidence Bucher was attempting to sell AI slop to students or publish dubious research, his unfortunate experience should serve as a warning. Bucher defended his approach by noting that individuals and universities are being encouraged to use AI for writing, planning, and teaching, with institutions experimenting with embedding it into curricula [2]. The incident raises critical questions about whether ChatGPT can be considered completely safe for professional use if a single click can irrevocably delete years of work, particularly when users are paying premium subscription fees expecting enterprise-level data protection.

Summarized by Navi