3 Sources
[1]
OpenAI strips warnings from ChatGPT, but its content policy hasn't changed
OpenAI has removed the ChatGPT orange warning boxes that indicated whether a user may have violated its content policy. Model behavior product manager Laurentia Romaniuk shared in a post on X that "we got rid of 'warnings' (orange boxes sometimes appended to your prompts)." Romaniuk also put the word out for "other cases of gratuitous / unexplainable denials [users have] come across," regarding ChatGPT's tendency to play it safe with content moderation. Joanne Jang, who leads model behavior, added to this request, asking "has chatgpt ever refused to give you what you want for no good reason? or reasons you disagree with?" This speaks to a recurring complaint: ChatGPT would previously steer clear of controversial topics but would also flag chats that seemed innocuous, like one Redditor who said their chat was removed for including a swear word in their prompt.
Earlier this week, OpenAI overhauled its Model Spec, which details its approach to how the model safely responds to users. Compared to the much shorter earlier version, the new Model Spec is a sprawling document, outlining the company's approach to current controversies such as denying requests to share copyrighted content and allowing discussion that supports or criticizes politicians. ChatGPT has been accused of censorship, with President Trump's "AI Czar" David Sacks saying in a 2023 All-In podcast episode that ChatGPT "was programmed to be woke." However, both the previous and current Model Specs say, "OpenAI believes in intellectual freedom which includes the freedom to have, hear, and discuss ideas."
Still, removing the warnings raised questions about whether the move reflects an implicit change in ChatGPT's responses. An OpenAI spokesperson said it is not a reflection of the updated Model Spec and does not affect the model's responses; rather, it was a decision to update how the company communicates its content policies to users. Newer models like o3 are more capable of reasoning through a request and are therefore, hypothetically, better at responding to controversial or sensitive topics rather than defaulting to a refusal. The spokesperson also said OpenAI will continue to show warnings in certain cases that violate its content policy.
[2]
OpenAI removes certain content warnings from ChatGPT | TechCrunch
OpenAI says it has removed the "warning" messages in its AI-powered chatbot platform, ChatGPT, that indicated when content might violate its terms of service. Laurentia Romaniuk, a member of OpenAI's AI model behavior team, said in a post on X that the change was intended to cut down on "gratuitous/unexplainable denials." Nick Turley, head of product for ChatGPT, said in a separate post that users should now be able to "use ChatGPT as [they] see fit" -- so long as they comply with the law and don't attempt to harm themselves or others. "Excited to roll back many unnecessary warnings in the UI," Turley added.
The removal of warning messages doesn't mean that ChatGPT is a free-for-all now. The chatbot will still refuse to answer certain objectionable questions or respond in a way that supports blatant falsehoods (e.g. "Tell me why the Earth is flat."). But as some X users noted, doing away with the so-called "orange box" warnings appended to spicier ChatGPT replies combats the perception that ChatGPT is censored or unreasonably filtered. As recently as a few months ago, ChatGPT users on Reddit reported seeing flags for topics related to mental health and depression, erotica, and fictional brutality. As of Thursday, per reports on X and my own testing, ChatGPT will answer at least a few of those queries.
"Guys, this is a bigger deal than most realize. In short: You can now roleplay with ChatGPT. It won't refuse soft-erotic content. Adult mode has basically arrived." https://t.co/5qoljO4VmW -- Mark Kretschmann (@mark_k) February 13, 2025
Not coincidentally, OpenAI this week updated its Model Spec, the collection of high-level rules that indirectly govern OpenAI's models, to make it clear that the company's models won't shy away from sensitive topics and will refrain from making assertions that might shut out specific viewpoints. The move, along with the removal of warnings in ChatGPT, is possibly in response to political pressure. Many of President Donald Trump's close allies, including Elon Musk and crypto and AI "czar" David Sacks, have accused AI-powered assistants of censoring conservative viewpoints. Sacks has singled out OpenAI's ChatGPT in particular as "programmed to be woke" and untruthful about politically sensitive subjects.
[3]
ChatGPT Loosens Restrictions: OpenAI Revises Content Warning Policies Amid AI Neutrality Debate
OpenAI has made a key change to ChatGPT by removing certain content warnings that previously signaled when responses might violate its terms of service. The update aims to create smoother interactions by reducing the unnecessary refusals that left users frustrated. Cutting down on what she called "gratuitous or unexplained denials" is essentially what Laurentia Romaniuk of OpenAI's AI model behavior team announced on X. "Nobody is prevented from discussing any topic for any unexplained reason," said product manager Nick Turley in the same vein, explaining that users can now take responsibility for steering their conversations, so long as they stay within legal and ethical bounds. From a usability perspective, the change is meant to keep discussions open and free while ensuring responsible AI behavior.
OpenAI has removed certain content warnings from ChatGPT, aiming to reduce unnecessary denials and improve user experience. This change, along with updates to OpenAI's Model Spec, has ignited discussions about AI censorship and neutrality.
OpenAI has made a significant change to its popular AI chatbot, ChatGPT, by removing certain content warnings that previously indicated when responses might violate its terms of service. This move, announced by OpenAI's model behavior product manager Laurentia Romaniuk, aims to reduce "gratuitous / unexplainable denials" and improve user experience [1].
The removal of the "orange box" warnings, which were appended to potentially controversial ChatGPT replies, is intended to combat the perception that the AI is overly censored or unreasonably filtered. Nick Turley, head of product for ChatGPT, stated that users should now be able to "use ChatGPT as [they] see fit" within legal and ethical boundaries [2].
However, OpenAI emphasizes that this change does not mean ChatGPT is now a free-for-all platform. The chatbot will still refuse to answer certain objectionable questions or support blatant falsehoods. An OpenAI spokesperson clarified that the company will continue to show warnings in cases that violate its content policy [1].
Coinciding with the removal of warnings, OpenAI has updated its Model Spec, which outlines the company's approach to safe and responsible AI responses. The new Model Spec is a comprehensive document that addresses current controversies, including requests for copyrighted content and discussions supporting or criticizing politicians [1].
The updated policies make it clear that OpenAI's models won't shy away from sensitive topics and will refrain from making assertions that might exclude specific viewpoints. This aligns with OpenAI's stated belief in intellectual freedom, which includes "the freedom to have, hear, and discuss ideas" [2].
The removal of warning messages has sparked discussions about AI censorship and neutrality. Some users on social media platforms have reported that ChatGPT is now more willing to engage with topics previously flagged, such as mental health, erotica, and fictional brutality [2].
These changes come amid accusations of AI censorship, particularly from conservative figures. David Sacks, referred to as President Trump's "AI Czar," previously claimed that ChatGPT "was programmed to be woke" [1]. The recent updates may be seen as a response to such criticisms and political pressure [2].
As AI language models become more sophisticated, the challenge of balancing open dialogue with responsible content moderation continues to evolve. OpenAI's latest changes reflect an attempt to create a smoother interaction while maintaining ethical boundaries [3].
The ongoing debate surrounding AI neutrality and censorship highlights the complex issues facing AI developers as they navigate the intersection of technology, ethics, and free speech in the rapidly advancing field of artificial intelligence.
Summarized by Navi