Professor loses two years of academic work after ChatGPT deletes all chats without warning

Reviewed by Nidhi Govil


Marcel Bucher, a plant sciences professor at the University of Cologne, lost two years of carefully structured academic work when ChatGPT permanently deleted all his chats after he disabled the data consent option. OpenAI stated the deletion was part of its "privacy by design" policy, with no recovery option available. The incident raises questions about the reliability of AI tools for professional use.

Professor's Two-Year Academic Archive Vanishes in Instant

Marcel Bucher, a professor of plant sciences at the University of Cologne, experienced a catastrophic data loss when ChatGPT permanently erased two years of his academic work without warning. The incident occurred in August when Bucher temporarily disabled the data consent option to test whether he would retain access to all features without sharing his data with OpenAI [1]. At that moment, all of his chats were permanently deleted and his project folders emptied, leaving only a blank page. The professor had been a ChatGPT Plus subscriber, paying €20 per month (approximately $23), and relied on the platform daily for writing emails, drafting course descriptions, structuring grant applications, revising publications, preparing lectures, creating exams, and analyzing student responses [1].

Source: Nature

No Recovery Option Despite Premium Subscription

After discovering the permanent deletion, Bucher attempted multiple recovery methods: trying different browsers, devices, and networks, clearing his cache, and reinstalling the app. When he contacted OpenAI support, he initially received responses from an AI agent before finally reaching a human employee, who confirmed the data were permanently lost and could not be recovered [1]. OpenAI explained the decision as "privacy by design," meaning everything is deleted without a trace when users deactivate data sharing. In a statement to Nature, OpenAI challenged Bucher's claim that there was no warning, stating it does "provide a confirmation prompt before a user permanently deletes a chat" [3]. The company recommended that users maintain personal backups of professional work.
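For users who want to follow that advice, ChatGPT offers a built-in export (under Settings → Data Controls → Export data) that emails a zip archive of your account data. A minimal Python sketch for filing each downloaded export into a timestamped backup folder might look like the following; the `backup_export` function and the paths are illustrative assumptions, not part of any OpenAI tooling:

```python
import shutil
import time
from pathlib import Path

def backup_export(export_path: str, backup_dir: str) -> Path:
    """Copy an exported archive into a new timestamped backup folder."""
    src = Path(export_path)
    # One subfolder per backup run, e.g. backups/2025-08-14_093012/
    dest_dir = Path(backup_dir) / time.strftime("%Y-%m-%d_%H%M%S")
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # copy2 also preserves file timestamps
    return dest
```

Run periodically on each export you download, this keeps dated copies outside the platform, which is exactly the redundancy the incident shows is otherwise missing.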

Source: Futurism

Social Media Backlash Highlights AI Skepticism

The professor's account of losing academic work triggered widespread criticism on social media rather than sympathy. Users questioned why Bucher had not made local backups over two years and criticized his heavy reliance on generative AI for academic work [2]. One Bluesky user wrote, "Amazing sob story: 'ChatGPT deleted all the work I hadn't done,'" while another suggested, "Maybe next time, actually do the work you are paid to do yourself, instead of outsourcing it to the climate-killing, suicide-encouraging plagiarism machine" [2]. Some users even speculated that Bucher's essay in Nature was not written by him. However, Heidelberg University teaching coordinator Roland Gromes offered a more measured response: "Well, kudos to Marcel Bucher for sharing a story about a deeply flawed workflow and a stupid mistake" [3].

Accountability Gap in AI Tool Reliability

Bucher's experience exposes a fundamental accountability gap in how AI platforms handle professional data. As institutions increasingly encourage the integration of generative AI into research and teaching, the incident reveals that these tools were not developed with academic standards of reliability in mind [1]. The professor acknowledged he was aware that large language models can produce seemingly confident but sometimes incorrect statements, noting that he never equated reliability with factual accuracy but instead relied on "the continuity and apparent stability of the workspace" [1]. As a paying subscriber, he assumed basic protective measures would be in place, including warnings about irreversible deletion, time-limited recovery options, and backups or redundancy.

Source: Gizmodo

Broader Implications for Academic AI Adoption

The use of generative AI in academia remains highly controversial, with scientific journals being flooded with poorly sourced AI slop and entire fraudulent journals emerging to capitalize on AI-generated content [3]. Scientists keep finding their work cited in papers whose references turn out to be entirely hallucinated [3]. While there is no evidence Bucher was attempting to sell AI slop to students or publish dubious research, his unfortunate experience should serve as a warning. Bucher defended his approach by noting that individuals and universities are being encouraged to use AI for writing, planning, and teaching, with institutions experimenting with embedding it into curricula [2]. The incident raises critical questions about whether ChatGPT can be considered safe for professional use if a single click can irrevocably delete years of work, particularly when users pay premium subscription fees expecting enterprise-level data protection.

TheOutpost.ai
© 2026 Triveous Technologies Private Limited