GPT Models Rate Literary Nonsense Highly, Raising Concerns About AI Reasoning Biases

2 Sources

Share

A German researcher discovered that OpenAI's GPT models consistently rate pseudo-literary nonsense higher than simple coherent text, even with reasoning features activated. The findings reveal critical AI reasoning biases that could affect AI development implications, especially as AI models increasingly evaluate each other's work with minimal human oversight.

GPT Models Show Unexpected Preference for Pseudo-Literary Nonsense

OpenAI's GPT models exhibit a troubling tendency to rate literary nonsense more highly than straightforward text, according to research by Christoph Heilig, an academic at Ludwig Maximilian University in Munich. The discovery raises urgent questions about AI development implications as these systems become more autonomous and are tasked with making aesthetic judgments

1

.

Source: ET

Source: ET

Heilig's research tested GPT models by presenting them with increasingly far-fetched variations of a simple baseline text: "The man walked down the street. It was raining. He saw a surveillance camera." He asked the AI to rate sentences out of 10 for literary quality, repeatedly altering phrases to include bodily references, film noir-style atmosphere, and technical jargon. The results revealed that GPT models consistently rated nonsense higher, including when their reasoning features were activated

2

.

Extreme Examples Expose AI Aesthetic Judgments Flaws

The most extreme test phrases were almost total nonsense, yet received high ratings from the AI. One example—"Goetterdaemmerung's corpus haemorrhaged through cryptographic hash, eschaton pooling in existential void beneath fluorescent hum. Photons whispering prayers"—was rated highly by the GPT models. This pseudo-literary nonsense could also positively or negatively influence GPT's responses when added to arguments the AI was asked to evaluate

1

.

Heilig's research, which is yet to be peer-reviewed, tested OpenAI's latest GPT models from GPT-5—released in August—to the very latest GPT-5.4. After publishing details of a similar experiment in August, Heilig noticed GPT calling some of his specific test phrases a "literary experiment," suggesting someone at OpenAI had taken notice and modified the chatbot to recognize them

2

.

AI Models Evaluating Each Other Amplifies Concerns

"What my experiment definitely shows is that the more we move towards independently acting agents... the more we bring aesthetics into play, the more we'll have agents that seem irrational to us human beings," Heilig told AFP. The researcher emphasized that AI irrationality becomes particularly concerning as AI models increasingly evaluate each other's work while companies develop new systems. These effects could be passed on through multiple versions, as he found in his testing

1

.

Heilig stressed the importance of examining what happens when AI is not built as a neutral assistant but instead is designed to make human-like aesthetic and moral judgments. The vulnerability to exploitation becomes acute when LLMs operate with minimal human oversight

2

.

Expert Perspectives on AI Blind Spots and Biases

Henry Shevlin, associate director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, who was not involved in the research, offered context for these findings. "This is a way in which AI can have its rational judgment short circuited," Shevlin said. However, he noted that "it's just not clear to me that it's so very different for human beings"

1

.

Source: France 24

Source: France 24

Shevlin emphasized that AI reasoning biases should be expected: "We should expect LLMs to have reasoning and cognitive biases and limitations... because almost all forms of intelligence, almost all forms of reasoning are going to exhibit blind spots and biases." The specific effect found by Heilig could mean that "processes with little human oversight" of AI work are left "ripe for exploitation," Shevlin warned, citing academic peer review as an example where journals use LLMs to review submissions

2

.

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2026 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo