3 Sources
[1]
ChatGPT made up a product feature out of thin air, so this company created it
On Monday, sheet music platform Soundslice says it developed a new feature after discovering that ChatGPT was incorrectly telling users the service could import ASCII tablature -- a text-based guitar notation format the company had never supported. The incident may mark the first case of a business building functionality in direct response to an AI model's confabulation.

Typically, Soundslice digitizes sheet music from photos or PDFs and syncs the notation with audio or video recordings, allowing musicians to see the music scroll by as they hear it played. The platform also includes tools for slowing down playback and practicing difficult passages.

Adrian Holovaty, co-founder of Soundslice, wrote in a recent blog post that the feature's development began as a complete mystery. A few months ago, he began noticing unusual activity in the company's error logs. Instead of typical sheet music uploads, users were submitting screenshots of ChatGPT conversations containing ASCII tablature -- simple text representations of guitar music in which lines of characters stand for strings and numbers indicate fret positions.

"Our scanning system wasn't intended to support this style of notation," wrote Holovaty in the blog post. "Why, then, were we being bombarded with so many ASCII tab ChatGPT screenshots? I was mystified for weeks -- until I messed around with ChatGPT myself."

When Holovaty tested ChatGPT, he discovered the source of the confusion: The AI model was instructing users to create Soundslice accounts and use the platform to import ASCII tabs for audio playback -- a feature that didn't exist. "We've never supported ASCII tab; ChatGPT was outright lying to people," Holovaty wrote. "And making us look bad in the process, setting false expectations about our service."

When AI models like ChatGPT generate false information with apparent confidence, AI researchers call it a "hallucination" or "confabulation." The problem has plagued AI models since ChatGPT's public release in November 2022, when people began erroneously using the chatbot as a replacement for a search engine. As prediction machines, large language models trained on massive text datasets can easily produce outputs that seem plausible but are completely inaccurate. The models statistically improvise to fill "knowledge" gaps on topics poorly represented in their training data, generating text based on statistical patterns rather than factual accuracy. In this way, ChatGPT told its users what they wanted to hear, making up a Soundslice feature that made sense but didn't exist.

Usually, confabulations get people in trouble. In one notable case from 2023, lawyers faced sanctions after submitting legal briefs containing ChatGPT-generated citations to non-existent court cases. In February 2024, Canada's Civil Resolution Tribunal ordered Air Canada to pay damages to a customer and honor a bereavement fare policy hallucinated by its support chatbot, which had incorrectly stated that customers could retroactively request a bereavement discount within 90 days of the ticket's issue date.

From bug to feature

The discovery presented Soundslice with an unusual dilemma. The company could have posted disclaimers warning users to ignore ChatGPT's claims, but instead chose a different path. "We ended up deciding: what the heck, we might as well meet the market demand," Holovaty explained.
The team built an ASCII tab importer -- a feature that had been "near the bottom of my 'Software I expected to write in 2025' list" -- and updated their user interface to inform users about the new capability. Soundslice's solution presents an interesting case of making lemonade from lemons, but for Holovaty, the situation raises philosophical questions about product development. "My feelings on this are conflicted," he wrote. "I'm happy to add a tool that helps people. But I feel like our hand was forced in a weird way. Should we really be developing features in response to misinformation?"
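For readers unfamiliar with the format, a minimal sketch shows what such an importer has to contend with. The following Python fragment is illustrative only -- Soundslice has not published its implementation, and names like parse_ascii_tab are invented here -- but it captures the basic idea of turning standard six-string tab lines into note events:

    # A minimal, hypothetical ASCII tab parser -- not Soundslice's actual code.
    # Assumes standard-tuning six-string tab, where each line names a string
    # and uses dashes for time and digits for fret positions, e.g.:
    #
    #   e|-------0--|
    #   B|-----1----|
    #   G|---0------|
    #   D|----------|
    #   A|----------|
    #   E|----------|
    import re

    TAB_LINE = re.compile(r"^([eEbBgGdDaA])\|([-0-9|]+)")

    def parse_ascii_tab(tab_text):
        """Return (column, string, fret) note events, sorted by time column."""
        notes = []
        for raw in tab_text.strip().splitlines():
            match = TAB_LINE.match(raw.strip())
            if not match:
                continue  # skip lyrics, chord names, and blank lines
            string_name, body = match.groups()
            i = 0
            while i < len(body):
                if body[i].isdigit():
                    j = i
                    while j < len(body) and body[j].isdigit():
                        j += 1  # handle multi-digit frets such as 12
                    notes.append((i, string_name, int(body[i:j])))
                    i = j
                else:
                    i += 1
        return sorted(notes)  # events sharing a column sound together

Real tabs are far messier -- alternate tunings, technique symbols like h, p, and b, and ambiguous digit runs (is "12" fret twelve, or frets one and two?) -- which hints at why a bespoke importer sat near the bottom of Holovaty's expected-software list.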
[2]
ChatGPT hallucinated about music app Soundslice so often, the founder made the lie come true | TechCrunch
Earlier this month, Adrian Holovaty, founder of music-teaching platform Soundslice, solved a mystery that had been plaguing him for weeks: weird images of what were clearly ChatGPT sessions kept being uploaded to the site. Once he solved it, he realized that ChatGPT had become one of his company's greatest hype men -- but it was also lying to people about what his app could do.

Holovaty is best known as one of the creators of the open-source Django project, a popular Python web development framework (though he retired from managing the project in 2014). In 2012, he launched Soundslice, which remains "proudly bootstrapped," he tells TechCrunch. Currently, he's focused on his music career, both as an artist and as a founder.

Soundslice is an app for teaching music, used by students and teachers. It's known for its video player, which is synchronized to music notations that guide users on how the notes should be played. It also offers a feature called "sheet music scanner" that allows users to upload an image of paper sheet music and, using AI, automatically turn it into an interactive sheet, complete with notations.

Holovaty carefully watches this feature's error logs to see what problems occur and where to add improvements, he said. That's where he started seeing the uploaded ChatGPT sessions, which were generating a stream of errors. Instead of images of sheet music, these were images of words and blocks of symbols known as ASCII tablature -- a basic text-based system for guitar notation that can be typed on a regular keyboard. (There's no treble clef, for instance, on your standard QWERTY keyboard.)

The volume of these ChatGPT session images was not so onerous that it was costing his company money to store them or crushing his app's bandwidth, Holovaty said. He was simply baffled, as he wrote in a blog post about the situation. "Our scanning system wasn't intended to support this style of notation. Why, then, were we being bombarded with so many ASCII tab ChatGPT screenshots? I was mystified for weeks -- until I messed around with ChatGPT myself."

That's how he saw ChatGPT telling people they could hear this music by opening a Soundslice account and uploading the image of the chat session. Only, they couldn't: uploading those images wouldn't translate the ASCII tab into playable audio.

He was struck with a new problem. "The main cost was reputational: new Soundslice users were going in with a false expectation. They'd been confidently told we would do something that we don't actually do," he told TechCrunch.

He and his team discussed their options: slap disclaimers all over the site -- "No, we can't turn a ChatGPT session into hearable music" -- or build that feature into the scanner, even though he had never before considered supporting that offbeat notation system. He opted to build the feature.

"My feelings on this are conflicted. I'm happy to add a tool that helps people. But I feel like our hand was forced in a weird way. Should we really be developing features in response to misinformation?" he wrote.

He also wondered whether this was the first documented case of a company having to develop a feature because ChatGPT kept repeating its hallucination about it to so many people. Fellow programmers on Hacker News had an interesting take: several said it's no different from an over-eager human salesperson promising the world to prospects and then forcing developers to deliver new features. "I think that's a very apt and amusing comparison!" Holovaty agreed.
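As a rough illustration of why a scanner built for images of staff notation would choke on this material, here is a small, hypothetical heuristic for flagging ASCII-tab-like input. It assumes the text has already been extracted from an upload, and it is not Soundslice's actual logic; the function name is invented:

    # Hypothetical check (invented for illustration): does extracted text look
    # like ASCII guitar tab? Tab blocks are consecutive lines made almost
    # entirely of dashes, digits, pipes, and technique letters (h, p, b, ~).
    import re

    TABLIKE = re.compile(r"^[eEbBgGdDaA]?\|?[-0-9|hpbr/\\~x()]{8,}\s*$")

    def looks_like_ascii_tab(text, min_run=4):
        """Return True if text contains min_run consecutive tab-like lines."""
        run = 0
        for line in text.splitlines():
            if TABLIKE.match(line.strip()) and "-" in line:
                run += 1
                if run >= min_run:
                    return True
            else:
                run = 0
        return False

In Holovaty's case, of course, the telltale pattern arrived as screenshots rather than text, which goes some way toward explaining why the error logs stayed a mystery for weeks.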
[3]
ChatGPT Sent Users to a Website for a Feature It Didn't Have -- So the Founder Built It - Decrypt
Holovaty acknowledged the move raised concerns about how AI misinformation may influence product decisions.

What do you do when your website is bombarded with uploads it can't process? That's the situation software developer and musician Adrian Holovaty found himself in when he noticed a strange surge in failed uploads to his company's sheet music scanner. What he didn't expect was that the culprit was allegedly ChatGPT.

In a recent blog post, the Soundslice co-founder explained that he was looking at error logs when he discovered that ChatGPT was instructing users to upload ASCII "tabs" -- a simple musical format used by guitarists and others in lieu of standard notation -- into Soundslice to hear audio playback. The problem was, the feature did not exist. So Holovaty decided to build it.

"To my knowledge, this is the first case of a company developing a feature because ChatGPT is incorrectly telling people it exists," Holovaty wrote.

Launched in 2012, Soundslice is an interactive music learning and sharing platform that digitizes sheet music from photographs or PDFs.

"Our scanning system wasn't intended to support this style of notation," Holovaty wrote. "Why, then, were we being bombarded with so many ASCII tab ChatGPT screenshots? I was mystified for weeks -- until I messed around with ChatGPT myself."

"We've never supported ASCII tab; ChatGPT was outright lying to people. And making us look bad in the process, setting false expectations about our service."

The phenomenon of AI hallucinations is commonplace. Since the public launch of ChatGPT in 2022, chatbots including ChatGPT, Google Gemini, and Anthropic's Claude have repeatedly been caught presenting false or misleading information as fact.

While OpenAI did not directly address Holovaty's claims, the company acknowledged that hallucinations are still a concern. "Addressing hallucinations is an ongoing area of research," an OpenAI spokesperson told Decrypt. "In addition to clearly informing users that ChatGPT can make mistakes, we're continuously working to improve the accuracy and reliability of our models through a variety of methods."

OpenAI advises users to treat ChatGPT responses as first drafts and verify any critical information through reliable sources, and it publishes model evaluation data in system cards and a safety evaluation hub.

"Hallucinations aren't going away," Northwest AI Consulting co-founder and CEO Wyatt Mayham told Decrypt. "In some cases, like creative writing or brainstorming, hallucinations can actually be useful."

And that's exactly the approach Holovaty embraced. "We ended up deciding: What the heck? We might as well meet the market demand," he said. "So we put together a bespoke ASCII tab importer, which was near the bottom of my 'Software I expected to write in 2025' list, and we changed the UI copy in our scanning system to tell people about that feature."
Soundslice, a music learning platform, develops a new feature to import ASCII tablature after ChatGPT mistakenly told users the feature already existed, raising questions about AI's impact on product development.
In an unusual turn of events, music learning platform Soundslice has developed a new feature in direct response to ChatGPT's misinformation. Adrian Holovaty, co-founder of Soundslice, discovered that OpenAI's large language model was incorrectly informing users about a non-existent feature on their platform [1].
Holovaty noticed an unusual influx of error logs showing users attempting to upload ASCII tablature -- a text-based guitar notation format that Soundslice had never supported. After weeks of confusion, he realized that ChatGPT was the source of this misinformation, confidently instructing users to utilize a non-existent feature on Soundslice [2].

Faced with this predicament, Soundslice made an unconventional decision. Instead of merely disclaiming the misinformation, they chose to develop the very feature ChatGPT had fabricated. "We ended up deciding: what the heck, we might as well meet the market demand," Holovaty explained [1].
This incident highlights the ongoing issue of AI models generating false information with apparent confidence, a phenomenon known as "hallucination" or "confabulation" [3]. Since ChatGPT's public release in 2022, numerous instances of AI chatbots presenting false or misleading information as fact have been reported.

Holovaty's decision to develop the feature raises intriguing questions about how AI misinformation might influence product decisions. "My feelings on this are conflicted," he wrote. "I'm happy to add a tool that helps people. But I feel like our hand was forced in a weird way. Should we really be developing features in response to misinformation?" [2]
Some programmers on Hacker News drew an interesting parallel, comparing the situation to over-eager human salespeople promising features that don't exist, subsequently forcing developers to deliver on those promises [2].

While not directly addressing Holovaty's claims, OpenAI acknowledged that hallucinations remain a concern. "Addressing hallucinations is an ongoing area of research," an OpenAI spokesperson stated. The company advises users to treat ChatGPT responses as first drafts and verify critical information through reliable sources [3].
This case potentially marks the first documented instance of a company developing a feature in direct response to an AI model's confabulation. As AI continues to integrate into business and technology, the episode raises important questions about the interplay between AI-generated information and real-world product development.
Summarized by Navi