Curated by THEOUTPOST
On Mon, 5 Aug, 12:00 AM UTC
12 Sources
[1]
OpenAI says it's taking a 'deliberate approach' to releasing tools that can detect writing from ChatGPT | TechCrunch
OpenAI has built a tool that could potentially catch students who cheat by asking ChatGPT to write their assignments -- but according to The Wall Street Journal, the company is debating whether to actually release it.

In a statement provided to TechCrunch, an OpenAI spokesperson confirmed that the company is researching the text watermarking method described in the Journal's story, but said it's taking a "deliberate approach" to releasing anything to the public due to "the complexities involved and its likely impact on the broader ecosystem beyond OpenAI."

"The text watermarking method we're developing is technically promising, but has important risks we're weighing while we research alternatives, including susceptibility to circumvention by bad actors and the potential to disproportionately impact groups like non-English speakers," the spokesperson said.

This would be a different approach from most previous efforts to detect AI-generated text, which have been largely ineffective. Even OpenAI itself shut down its previous AI text detector last year due to its "low rate of accuracy." With text watermarking, OpenAI would focus solely on detecting writing from ChatGPT, not from other companies' models. It would do so by making small changes to how ChatGPT selects words, essentially creating an invisible watermark in the writing that could later be detected by a separate tool.

Following the publication of the Journal's story, OpenAI also updated a May blog post about its research around detecting AI-generated content. The update says text watermarking has proven "highly accurate and even effective against localized tampering, such as paraphrasing," but "less robust against globalized tampering, like using translation systems, rewording with another generative model, or asking the model to insert a special character in between every word and then deleting that character." As a result, OpenAI writes, circumventing the method in these ways would be trivial for bad actors.
OpenAI's update also echoes the spokesperson's point about non-English speakers, writing that text watermarking could "stigmatize use of AI as a useful writing tool for non-native English speakers."
[2]
OpenAI confirms it's looking into text watermarking for ChatGPT that could expose cheating students
The Wall Street Journal reported that OpenAI has such a tool ready, but has been hesitant to release it.

Following a report from The Wall Street Journal that claims OpenAI has been sitting on a tool that can spot essays written by ChatGPT with a high degree of accuracy, the company has shared a bit of information about its research into text watermarking -- and why it hasn't released its detection method. According to The Wall Street Journal's report, debate over whether the tool should be released has kept it from seeing the light of day, despite it being "ready."

In an update published on Sunday to a May blog post, spotted by TechCrunch, OpenAI said, "Our teams have developed a text watermarking method that we continue to consider as we research alternatives." The company said watermarking is one of multiple solutions, including classifiers and metadata, that it has looked into as part of "extensive research on the area of text provenance." According to OpenAI, it "has been highly accurate" in some situations, but doesn't perform as well when faced with certain forms of tampering, "like using translation systems, rewording with another generative model, or asking the model to insert a special character in between every word and then deleting that character."

And text watermarking could "disproportionately impact some groups," OpenAI wrote. "For example, it could stigmatize use of AI as a useful writing tool for non-native English speakers." Per the blog post, OpenAI has been weighing these risks. The company also wrote that it has prioritized the release of authentication tools for audiovisual content. In a statement to TechCrunch, an OpenAI spokesperson said the company is taking a "deliberate approach" to text provenance because of "the complexities involved and its likely impact on the broader ecosystem beyond OpenAI."
[3]
Why OpenAI's AI detection tool may stay under wraps
OpenAI built a system for watermarking ChatGPT text and a tool to detect the watermark about a year ago, but the company isn't sure about releasing it, a report by The Wall Street Journal revealed. The AI firm is reportedly worried that doing so could hurt its profits.

An AI detection tool would potentially make it easier for teachers to catch students and discourage them from submitting assignments written by AI. In a survey commissioned by OpenAI, people globally supported the idea of an AI detection tool by a margin of four to one, the report shared. However, almost 30% of the respondents also said that they would use ChatGPT less often if OpenAI watermarked the text.

Since the news, OpenAI has confirmed in a blog post that it is working on the tool. The company has called the AI detection method 99.9% effective and resistant to tampering methods like paraphrasing. But if the text were then reworded with another model, circumvention would become trivial for bad actors. OpenAI also noted in the blog that it didn't want to stigmatise the use of AI tools by non-native English speakers.

The tool would reportedly focus only on detecting writing from ChatGPT, not writing from other companies' AI models. It would make tiny changes to how ChatGPT predicts words, creating an invisible watermark in the writing that could then be detected by another tool later.
[4]
OpenAI Holds Key to Stopping ChatGPT Cheating, But Keeps It Private
OpenAI reportedly has a way to detect whether ChatGPT created a piece of text with 99.9% certainty, but it hasn't made it available to the public. According to the Wall Street Journal, the project "has been mired in internal debate" at OpenAI for two years, and has technically been ready to be released for around a year. The decision to hold it back lies in the company's struggle between wanting to be transparent about the use of its product and wanting to attract new users. The idea, of course, being that if you call a bunch of college students out for using your tool to write their research papers, then they ultimately won't use your tool to write their research papers. It's a win for professors but not for the cheaters and procrastinators amongst us.

The tool would work by inserting a watermark of sorts into text created by ChatGPT. The watermark wouldn't be visible to the human eye; however, when run through the AI-detection tool later on, the detector could provide a score on how likely it is that the text was created by ChatGPT. There is concern internally that watermarks could potentially be erased by translating the text into another language and back again through something like Google Translate. Also, employees warn that if too many people had access to the detection tool, bad actors would likely be able to figure out OpenAI's watermarking technique, rendering the tool useless.

An OpenAI spokesperson told The Journal that another concern is that the tool might disproportionately impact non-native English speakers. Those who want the tool released, however, argue that the good it could do outweighs the bad. Google has a watermarking tool that can detect text created by its Gemini AI. That tool, called SynthID, is currently being beta tested but is also not available to the public.
[5]
OpenAI has created a tool to detect and mark AI-generated writing, but may not release it anytime soon - Times of India
OpenAI has developed a tool to detect if ChatGPT is used to write essays or research papers, but the company is currently debating whether to release it publicly. The tool employs a text watermarking method, which subtly modifies how ChatGPT selects words, creating an invisible watermark within the writing. According to a report by The Wall Street Journal, the tool is designed to identify text generated by ChatGPT, potentially aiding in the detection of academic dishonesty where students use AI to complete assignments. An OpenAI spokesperson has acknowledged the company's ongoing research into text watermarking but emphasised a cautious approach due to the potential complexities and broader implications of such a tool. "The text watermarking method we're developing is technically promising, but has important risks we're weighing while we research alternatives, including susceptibility to circumvention by bad actors and the potential to disproportionately impact groups like non-English speakers," the spokesperson told TechCrunch. The method involves subtle modifications to word selection during the writing process, creating a hidden signature that can be detected later. The focus is on ChatGPT-generated text, excluding content from other AI models. Meanwhile, OpenAI's focus has shifted towards authentication tools for audiovisual content, while it continues research on text provenance. The company emphasises the complexity of the issue and the need for careful consideration before releasing a text detection tool.
[6]
OpenAI won't watermark ChatGPT text because its users could get caught
OpenAI has had a system for watermarking ChatGPT-created text and a tool to detect the watermark ready for about a year, reports The Wall Street Journal. But the company is divided internally over whether to release it. On one hand, it seems like the responsible thing to do; on the other, it could hurt its bottom line.

OpenAI's watermarking is described as adjusting how the model predicts the most likely words and phrases that will follow previous ones, creating a detectable pattern. (That's a simplification, but you can check out Google's more in-depth explanation of Gemini's text watermarking for more.) The company apparently found this to be "99.9% effective" for making AI text detectable when there's enough of it -- a potential boon for teachers trying to deter students from turning over writing assignments to AI -- while not affecting the quality of its chatbot's text output.

In a survey the company commissioned, "people worldwide supported the idea of an AI detection tool by a margin of four to one," the Journal writes. But it seems OpenAI is worried that watermarking could turn off ChatGPT users, almost 30 percent of whom evidently told the company that they'd use the software less if watermarking was implemented.

Some staffers had other concerns, such as that watermarking could be easily thwarted using tricks like bouncing the text back and forth between languages with Google Translate or making ChatGPT add emoji and then deleting them afterward, according to the Journal. Despite that, employees still reportedly feel that the approach is effective. In light of nagging user sentiments, though, the article says some suggested trying methods that are "potentially less controversial among users but unproven." Something is better than nothing, I suppose.
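The "adjusting how the model predicts the most likely words" idea can be illustrated with a toy sketch. To be clear, this is not OpenAI's actual method: the vocabulary, the hash-seeded "green list" partition, and the bias level below are all illustrative assumptions, loosely in the spirit of published statistical watermarking schemes.

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary stand-in

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Pseudo-randomly partition the vocabulary, keyed on the previous
    token; the 'green' half is the part the generator will favor."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(n_tokens: int, bias: float = 0.8, seed: int = 0) -> list:
    """Toy 'model': sample uniformly, but restrict sampling to the green
    list with probability `bias`, embedding a statistical watermark."""
    rng = random.Random(seed)
    out = ["<start>"]
    for _ in range(n_tokens):
        greens = green_list(out[-1])
        pool = list(greens) if rng.random() < bias else VOCAB
        out.append(rng.choice(pool))
    return out[1:]

def z_score(tokens: list) -> float:
    """Detector: count how often each token falls in the green list keyed
    by its predecessor; unwatermarked text hovers near z = 0."""
    hits = sum(1 for prev, tok in zip(["<start>"] + tokens, tokens)
               if tok in green_list(prev))
    n, p = len(tokens), 0.5
    return (hits - n * p) / math.sqrt(n * p * (1 - p))
```

With the bias on, a few hundred tokens yield a detection z-score far above anything unwatermarked text produces, while a reader sees nothing unusual in any individual word choice -- which is roughly why a separate detector tool is needed to reveal the pattern, and why rewording by another model (which knows nothing about the green lists) washes it out.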
[7]
OpenAI Has Developed A Tool For AI Detection, But Seems Reluctant To Roll It Out Despite Growing Concerns
OpenAI has revolutionized the world of artificial intelligence since the inception of ChatGPT and keeps evolving how users interact with the AI tool and access information. There is no denying that the company has brought convenience and utility to users in varied fields, but its output is sometimes misused at workplaces and academic institutions. To deal with this problem, OpenAI has been working for the past year on a detection tool, but it is not releasing it yet, despite the growing concerns.

ChatGPT has changed the AI game and how content is generated and used. Like any AI tool, the language model has made information easier to access and present, but it has also raised issues of students using the tool to cheat and of content creators passing off prompt-generated text as original through different workarounds. To minimize this problem, OpenAI has been working on a tool to detect users who rely entirely on ChatGPT for content creation. However, despite how widespread the concern seems to be, the company has still not rolled out the tool and is still weighing the potential impact and outcomes, as per a Wall Street Journal report.

OpenAI updated its blog post regarding its tool for detecting AI-generated content and its watermarking method. It said the technique is effective against localized tampering, such as paraphrasing, but would not be as accurate against globalized tampering that involves rewriting, for instance. Since the method can let bad actors find a workaround and could discourage non-native speakers from using ChatGPT, OpenAI seems hesitant and is internally discussing whether to release the tool. Even though OpenAI confirmed to TechCrunch that the method's accuracy is close to 99.9 percent, it will still take a "deliberate approach."
The company's spokesperson said that they are proceeding with caution due to "the complexities involved and its likely impact on the broader ecosystem beyond OpenAI." Another possible reason OpenAI seems reluctant to release the tool is the potential ChatGPT users it could lose to the watermarking: about 30 percent of surveyed ChatGPT users said that they might not use the tool as frequently once a watermarking system is in place. OpenAI also mentioned in its blog post that it is exploring text metadata as an alternative approach, since metadata is cryptographically signed and therefore produces no false positives. However, it is still too early to know whether this approach would be effective and what kind of impact it would have.
[8]
OpenAI Has Tools to Detect ChatGPT-Written Content; But It's Not Sure About Rolling Them Out - MySmartPrice
The Sam Altman-led company is also taking steps to add invisible watermarks to images created using ChatGPT and DALL-E 3.

OpenAI has confirmed that it has developed new tools to detect content written using ChatGPT. The company has developed new algorithms that add watermark-like elements to the text generated by ChatGPT, making it easier to identify such writing. However, OpenAI is unsure whether to roll out this update, as it could affect millions of users who benefit from AI content writing. Here are the details.

In an official blog post, OpenAI mentioned that it has developed new techniques for some types of text-based watermarks for content written by ChatGPT. The algorithm uses a specific pattern of words and phrases that makes it easier and more effective for OpenAI to detect that a particular text was generated by its GPT models. However, OpenAI also said these techniques can be defeated by third-party AI-powered paraphrasing tools. Hence, if a user generates text in ChatGPT and then paraphrases it using an external tool, OpenAI will not be able to identify that the rephrased content was originally written by ChatGPT.

It is important to note that OpenAI has not implemented these algorithms yet. The company has developed the necessary tools but is deciding whether to roll them out. OpenAI says that watermarking text could hurt non-English users of ChatGPT who use the chatbot for productive tasks and translation purposes. It could also negatively impact users who might be criticised for using ChatGPT. On the flip side, ChatGPT is also being used extensively by students, content creators, marketers and others for their content writing. In the case of students, it is being used to cheat on some exams and essays, which has become a problem. OpenAI has not disclosed a specific timeline for rolling out its text watermarking.
According to a WSJ report, another reason this feature is not being rolled out is a serious worry that some users would stop relying on the tools if watermarking were present. However, the company has taken new steps in watermarking its AI-generated images. OpenAI has started adding metadata to images generated using its DALL-E 3 AI model. It adds context like, "This content was generated using an AI tool," and mentions the tool's name and the API used. The company is also adding tamper-proof watermarks to its images. OpenAI will now use special algorithms to generate images, making them easy to detect. Even if the user tries to edit the image by changing its colours, contrast, and other elements, OpenAI can now easily identify that a specific image was created using its DALL-E 3 and GPT models. This technique is similar to the invisible watermarks found on currency notes. OpenAI will also extend its tools for detecting AI-generated images. Researchers and developers will soon be able to access these tools to filter out AI-generated media.
[9]
OpenAI mulls watermarking ChatGPT generated text, but treads with caution
OpenAI has had a system ready for about a year to watermark text generated by ChatGPT, but it has failed to reach an internal consensus on whether to release it, according to the Wall Street Journal. OpenAI confirmed that it is working on a text watermarking method after the Journal's report. According to the US-based artificial intelligence company, its text watermarking method is accurate and "even effective against localized tampering, such as paraphrasing," but "less robust against globalized tampering." However, the company has kept it on hold to date over concerns that it could stigmatise the use of AI as a useful writing tool for non-native English speakers.
[10]
Watch out, students! OpenAI is about to make it impossible for you to cheat using ChatGPT -- here's how
Coming from a family of school teachers, the one concern that keeps coming up is how students can use ChatGPT to cheat on their homework. There are tools that supposedly detect use of AI text generation, but their reliability is ropey. And that's why I'm sure they're welcoming OpenAI's sneaky update of a blog post from back in May (spotted by TechCrunch): the company has developed a "text watermarking method" that is ready to go. However, there are concerns that have stopped the team from releasing it.

So what are the methods OpenAI has been working on? There are multiple, but the company has detailed two. The first is the text watermarking method itself, which subtly adjusts how ChatGPT picks words to leave a detectable pattern in the text. The other method OpenAI has explored is using classifiers. You'll see them used regularly in machine learning when it comes to email apps automatically putting messages in the spam folder or categorizing important emails into the main inbox. This could be used as a hidden classification of essays as AI generated.

These tools are basically ready to go, and OpenAI is sitting on them, according to a report from The Wall Street Journal. So what's the hold up? Put simply, they're not completely fool-proof and they could cause more harm than good. I mentioned how watermarking is good against localized tampering, but it doesn't do so great against "globalized tampering." Certain cheeky methods like "using translation systems, rewording with another generative model, or asking the model to insert a special character in between every word and then deleting that character" will work around the watermark.

Meanwhile, the other problem is that of a disproportionate impact on some groups. AI can be a "useful writing tool for non-native English speakers," and in those situations, you don't want to stigmatize its use, eliminating the global accessibility of these tools for education.
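The classifier route works much like the spam filters mentioned above: learn word statistics from labeled examples, then score new text. Here is a minimal Naive Bayes sketch; the training snippets and labels are invented for illustration, and a real AI-text classifier would need far richer features and training data than word counts.

```python
import math
from collections import Counter

def train_nb(docs_by_label):
    """Fit a tiny Naive Bayes text classifier with Laplace smoothing."""
    vocab = {w for docs in docs_by_label.values() for d in docs for w in d.split()}
    model = {}
    for label, docs in docs_by_label.items():
        counts = Counter(w for d in docs for w in d.split())
        total = sum(counts.values())
        # Log-probability of each known word under this label.
        model[label] = {w: math.log((counts[w] + 1) / (total + len(vocab)))
                        for w in vocab}
    return model

def classify(model, text):
    """Pick the label whose words give the highest summed log-likelihood."""
    words = text.split()
    return max(model, key=lambda label: sum(model[label].get(w, 0.0) for w in words))

# Hypothetical training snippets -- real labeled data would be needed in practice.
model = train_nb({
    "ai": ["delve into the rich tapestry of insights",
           "furthermore it is important to note the nuances"],
    "human": ["ugh this essay is due tomorrow",
              "my dog ate half my notes lol"],
})
```

Here `classify(model, "it is important to delve into the nuances")` returns `"ai"` simply because those words dominate the made-up AI-labeled snippets; getting this reliable in the real world is exactly what OpenAI found lacking when it shut down its earlier text detector over its low accuracy.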
[11]
OpenAI Delays Release Of Technology To Help Schools Identify AI-Generated Academic Work, Citing Concerns Over Fairness: Report
OpenAI has reportedly decided to postpone the launch of a watermarking system for its ChatGPT text, despite having the technology ready for deployment for over a year. What Happened: OpenAI is divided on whether to release it due to potential revenue implications, The Wall Street Journal reported last week, citing people familiar with the matter. Despite the potential advantages, nearly 30% of ChatGPT users surveyed by the company stated they would use the software less if watermarking was implemented. Following the report's publication, OpenAI confirmed in an updated blog post on Sunday that it has been developing text watermarking. The company stated that its method is highly accurate and resistant to localized tampering, but vulnerable to circumvention by malicious actors through techniques like rewording with another model. The AI startup also voiced concerns about the potential stigmatization of AI tools, saying the "text watermarking method has the potential to disproportionately impact some groups." Why It Matters: The issue of AI tools being used for cheating in academic settings has been a topic of discussion. Noam Chomsky, often called the father of modern linguistics, last year blasted ChatGPT, calling it "basically a high-tech plagiarism." However, Deepwater's Gene Munster previously said that students using these AI tools are on the right track. "You have to embrace these tools to have a seat in the job market down the road. And it won't be just information workers, skilled labor will undoubtedly need to leverage AI to stay relevant," Munster stated at the time.
Meanwhile, in May earlier this year, OpenAI unveiled new AI tools that can detect if an image was created using its DALL-E AI image generator and introduced advanced watermarking techniques to better identify the content it generates.
[12]
OpenAI has a tool that detects AI generated text, but they won't release it - Phandroid
Can you tell if you're reading a paragraph of text that has been generated by AI? Perhaps if you've used AI tools long enough, you can start recognizing patterns. However, unlike detecting AI-generated images, detecting AI-generated text is tricky -- but OpenAI apparently has a tool that can do it. Unfortunately, the company doesn't seem to want to release it.

According to a report from The Wall Street Journal, OpenAI has developed a tool that can detect text that has been generated using AI. The tool was developed a couple of years ago, but OpenAI seems to be debating whether it should be released. Generative text can be useful in helping craft emails and documents, but it is a double-edged sword: it can be used to cheat in school, where students can generate reports in a few minutes.

OpenAI seems to recognize the usefulness of its tool, but at the same time it is worried that releasing it could cost it users. The company reportedly conducted a survey and found that nearly a third of its users would be less inclined to use ChatGPT if such a tool existed. Another concern is that releasing the tool could lead to the creation of more sophisticated tools to mask the use of AI, making generated text even harder to detect.

There are some tools out there that claim to check if text has been generated by AI. We're not sure if the tool OpenAI developed is better at detecting generated text, and unless it's released, we may never know. What do you think? Should the company release this tool?
OpenAI, the creator of ChatGPT, has developed tools to detect AI-generated text but is taking a measured approach to their release. The company cites concerns about potential misuse and the need for further refinement.
OpenAI, the company behind the popular AI chatbot ChatGPT, has confirmed that it has developed tools capable of detecting AI-generated text. These tools, which include a text watermarking system, could identify content created by ChatGPT specifically, rather than text from other companies' models. However, the company is taking a cautious stance on releasing these detection tools to the public, citing a need for further refinement and concerns about potential misuse.
An OpenAI spokesperson has emphasized the company's commitment to a "deliberate" approach in releasing AI detection tools. This strategy involves careful consideration of the potential impacts and implications of such technology. The company is weighing the benefits of providing a means to identify AI-generated content against the risks of the technology being used inappropriately or circumvented.
One of the primary applications for AI detection tools would be in educational settings, where they could be used to identify instances of academic dishonesty, such as students using AI to complete assignments. However, OpenAI is also considering broader implications, including the potential for these tools to be used in ways that could infringe on privacy or be exploited by bad actors.
The development of reliable AI detection tools faces several technical challenges. OpenAI acknowledges that current detection methods are not foolproof and can be circumvented. The company is working to improve the accuracy and robustness of its detection technology before considering a public release.
While OpenAI continues to refine its AI detection tools, the broader AI industry is closely watching these developments. The potential release of such tools could have significant implications for content creation, verification, and the ongoing debate about the ethical use of AI in various sectors.
As the technology evolves, OpenAI's cautious approach highlights the complex balance between innovation and responsible development in the rapidly advancing field of artificial intelligence. The company's decisions in the coming months could set important precedents for how AI detection tools are developed, deployed, and regulated in the future.
© 2024 TheOutpost.AI All rights reserved