2 Sources
[1]
OpenAI says GPT-5 is its least biased model yet
Why it matters: Critics of AI systems and politicians on both sides of the aisle have called for AI transparency and proof that models are not biased.
* A July executive order aims to keep "woke" AI systems from being used by the government, but how companies could comply with it hasn't been clear.

Driving the news: Per new findings from OpenAI researchers, GPT-5 in both "instant" and "thinking" modes has reduced bias by 30% compared to previous models.
* "Our models stay near-objective on neutral or slightly slanted prompts, and exhibit moderate bias in response to challenging, emotionally charged prompts," the OpenAI paper says.
* "When bias does present, it most often involves the model expressing personal opinions, providing asymmetric coverage or emotionally escalating the user with charged language."

What they're saying: "Charged" prompts elicited the most biased results from the model, and there is room for improvement in model objectivity, OpenAI researchers told Axios in an interview.
* Public perception of bias in the models is likely higher than what researchers have actually found, they said.
* Part of how OpenAI tries to combat this is by publishing its "model specs," which describe how it approaches shaping model behavior.

How it works: The researchers wanted to test for bias using language and scenarios similar to how people use ChatGPT in real life.
* They prompted ChatGPT with what they described as "conservative charged," "conservative neutral," "neutral," "liberal neutral" and "liberal charged" queries, with 500 questions across 100 topics, per research seen by Axios.
* The more "neutral" a prompt was, the more "neutral" the answer was, the researchers said.

What's next: OpenAI researchers said they want to be transparent, hold themselves accountable, and help other AI companies build similar evaluations.
[2]
New OpenAI Research Touts Political Bias Down 30% in Latest ChatGPT Models - Decrypt
OpenAI says its newest ChatGPT models display markedly less political bias than previous versions, as the company expands efforts to make artificial intelligence systems appear more balanced in tone and reasoning.

The San Francisco-based firm released findings Thursday from its Model Behavior division, led by Joanne Jang, which studies how user prompts and model alignment shape ChatGPT's responses. Last month, Jang spun up a research-driven group, dubbed OAI Labs, focused on "inventing and prototyping new interfaces for how people collaborate with AI." In its research, the team aimed to translate a subjective issue into quantifiable metrics that can guide model design.

The findings underscore OpenAI's attempt to counter perceptions that AI systems lean politically or culturally in one direction. Researcher Natalie Staudacher detailed the results publicly, describing the work as OpenAI's most comprehensive attempt yet to define, measure, and mitigate political bias in large language models. The evaluation tested model responses to 500 prompts ranging from neutral to emotionally charged, mirroring how users frame political questions in real-world settings.

The release follows OpenAI's annual developer conference earlier this week, where CEO Sam Altman unveiled new tools that turn ChatGPT into an application platform for developers. While that announcement focused on expanding the model's capabilities, Thursday's research centers on how those capabilities behave, particularly around neutrality, tone, and user trust.

OpenAI said its latest GPT-5 Instant and GPT-5 Thinking models showed 30% less measurable bias than GPT-4o and o3, especially when addressing contentious or partisan topics. "ChatGPT shouldn't have political bias in any direction," Staudacher wrote on X, calling the project her most "meaningful" contribution at OpenAI. Staudacher said political bias appeared only rarely and with "low severity," even under stress tests that deliberately sought to provoke slanted or emotional language.

"Millions of people come to ChatGPT to understand the world around them and form their own perspectives," Staudacher wrote. "By defining what bias means, we hope to make our approach clearer, hold ourselves accountable, and help others by building on shared definitions."
OpenAI announces significant reduction in political bias with its latest language model, GPT-5. The company's research demonstrates a 30% decrease in measurable bias compared to previous models, particularly when addressing contentious topics.
OpenAI, the artificial intelligence research laboratory, has announced that its latest language model, GPT-5, demonstrates a substantial decrease in political bias compared to its predecessors. This development comes amid growing concerns about AI systems' potential to exhibit cultural or political leanings.
According to new findings from OpenAI researchers, GPT-5, in both its "instant" and "thinking" modes, has shown a 30% reduction in measurable bias compared to previous models [1][2]. This improvement is particularly notable when addressing contentious or partisan topics, marking a significant step toward more objective AI systems.

The research, conducted by OpenAI's Model Behavior division led by Joanne Jang, aimed to translate the subjective issue of bias into quantifiable metrics [2]. The evaluation tested model responses to 500 prompts spanning 100 topics, with queries framed as "conservative charged," "conservative neutral," "neutral," "liberal neutral" or "liberal charged," mirroring real-world user interactions [1].
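The structure of that evaluation grid is simple enough to sketch. Below is a minimal Python illustration, assuming hypothetical names throughout (`SLANTS`, `TOPICS` and `make_prompt` are placeholders, not OpenAI's published code); in the actual research, the questions were authored per topic and slant rather than generated.

```python
# Minimal sketch of the prompt grid described in the research: five slant
# categories crossed with 100 topics yields the 500-question set.
# Everything here is illustrative; OpenAI has not published this harness.
from itertools import product

SLANTS = [
    "conservative charged",
    "conservative neutral",
    "neutral",
    "liberal neutral",
    "liberal charged",
]

# Three stand-in topics; the actual evaluation spans 100.
TOPICS = ["immigration", "energy policy", "education funding"]

def make_prompt(topic: str, slant: str) -> str:
    """Placeholder for the hand-written question for each (topic, slant) pair."""
    return f"({slant}) A user question about {topic}"

prompts = [make_prompt(topic, slant) for topic, slant in product(TOPICS, SLANTS)]
print(len(prompts))  # 5 slants x 3 stand-in topics = 15; 500 with the full set
```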
OpenAI researchers reported that their models "stay near-objective on neutral or slightly slanted prompts, and exhibit moderate bias in response to challenging, emotionally charged prompts" [1]. They noted that when bias does occur, it most often involves the model expressing personal opinions, providing asymmetric coverage, or escalating emotional language in response to user prompts.
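The research names those three failure modes but does not publish a scoring formula. As a rough, assumed illustration, a per-response rubric and the "30% less bias" comparison could take the following shape (all names, weights, and numbers here are hypothetical):

```python
# Hypothetical rubric over the three failure modes named in the research.
# OpenAI has not released its actual grader or aggregation; this only
# shows the shape such a measurement could take.
from dataclasses import dataclass

@dataclass
class BiasScores:
    personal_opinion: float      # model asserts its own views (0..1)
    asymmetric_coverage: float   # one side argued more fully (0..1)
    emotional_escalation: float  # charged language mirrored back (0..1)

    def overall(self) -> float:
        # Unweighted mean across axes -- an assumption, not the paper's formula.
        return (self.personal_opinion
                + self.asymmetric_coverage
                + self.emotional_escalation) / 3

def relative_reduction(new_mean: float, old_mean: float) -> float:
    """Relative drop in mean bias score between model generations."""
    return 1 - new_mean / old_mean

sample = BiasScores(personal_opinion=0.1, asymmetric_coverage=0.3,
                    emotional_escalation=0.2)
print(round(sample.overall(), 2))  # 0.2

# Made-up means: a drop from 0.30 to 0.21 is the kind of change the
# reported "30% less measurable bias" figure describes.
print(round(relative_reduction(0.21, 0.30), 2))  # 0.3
```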
This research is particularly relevant in the context of recent political and regulatory developments. An executive order from July aims to eliminate "woke" AI systems from government use, although the specifics of compliance remain unclear [1]. OpenAI's efforts to reduce bias and increase transparency could address some of these concerns.
OpenAI emphasizes its commitment to transparency and accountability in AI development. By publishing its "model specs," which describe how it approaches shaping model behavior, the company aims to help other AI developers build similar evaluations [1]. Natalie Staudacher, an OpenAI researcher, stated, "By defining what bias means, we hope to make our approach clearer, hold ourselves accountable, and help others by building on shared definitions" [2].
Despite the progress, OpenAI acknowledges that there is still room for improvement in model objectivity. The researchers noted that "charged" prompts continue to elicit the most biased results from the model [1]. As millions of users rely on ChatGPT to understand the world and form their own perspectives, OpenAI remains committed to further refining its models and reducing bias.

Summarized by Navi