2 Sources
[1]
This founder had to train his AI to not rickroll people | TechCrunch
Flo Crivello was monitoring outputs from the AI assistants his company Lindy makes when he noticed something strange. A new client had asked her Lindy AI assistant for a video tutorial that would better help her understand how to use the platform, and the Lindy responded in kind. That's when Crivello knew something was wrong: there is no video tutorial.

"We saw this and we were like, 'Ok, what kind of video did it send?' and then we were like, 'Oh snap, this is a problem,'" Crivello told TechCrunch.

The video the AI sent the client was the music video for Rick Astley's 1987 dance-pop hit "Never Gonna Give You Up." In more familiar terms: the client got rickrolled. By an AI.

Rickrolling is a bait-and-switch meme that's over fifteen years old. In one incident that popularized the meme, Rockstar Games released the much-hyped "Grand Theft Auto IV" trailer on its website, but traffic was so immense that the site crashed. Some people had managed to download the video and post it to other sites like YouTube, sharing the links so that people could see the trailer. But one 4chan user decided to play a prank and share the link to Rick Astley's "Never Gonna Give You Up" instead. Seventeen years later, people are still pranking their friends by sharing the Astley song at inopportune moments, and the music video now has over 1.5 billion views on YouTube.

This internet prank is so ubiquitous that, inevitably, large language models like ChatGPT, which powers Lindy, picked up on it. "The way these models work is they try to predict the most likely next sequence of text," Crivello said. "So it starts like, 'Oh, I'm going to send you a video!' So what's most likely after that? YouTube.com. And then what's most likely after that?"

Crivello told TechCrunch that out of millions of responses, Lindy only rickrolled customers twice. Still, the error needed to be patched.

"The really remarkable thing about this new age of AI is, to patch it, all I had to do was add a line to what we call the system prompt - which is the prompt that's included in every Lindy - and it's like, don't rickroll people," he said.

Lindy's lapse calls into question just how much of internet culture will be subsumed into AI models, since these models are often trained on large swaths of the web. Lindy's accidental rickroll is particularly remarkable because the AI organically reproduced this very specific user behavior, which informed its hallucination. But traces of internet humor seep into AI in other ways, as Google learned the hard way when it licensed Reddit data to train its AI. Because Reddit is a hub of user-generated content - much of which is satirical - Google's AI ended up telling a user that you can make cheese stick to pizza dough better by adding glue.

"In the Google case, it wasn't exactly making stuff up," Crivello said. "It was based on content - it's just that the content was bad."

As LLMs rapidly improve, Crivello thinks we won't see as many gaffes like this in the future. Plus, Crivello says it's easier than ever to patch these mishaps. In the early days of Lindy, if one of its AI assistants couldn't complete a task the user asked for, the AI would say it was working on it but never deliver the product. (Oddly enough, that sounds pretty human.)

"It was really hard for us to patch that issue," Crivello said. "But when GPT-4 came out, we just added a prompt that was like, 'If the user asks you to do something you're not able to do, just tell them you can't do it.' And that fixed it."
For now, the good news is that the customer who got rickrolled might not even know it. "I don't even know that the customer saw it," he said. "We followed up immediately like, 'Oh hey, this is the right link to the video,' and the customer didn't say anything about the first link."
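The fix Crivello describes is configuration rather than retraining: an instruction prepended to every conversation. As a minimal sketch of that kind of system-prompt guardrail, assuming an OpenAI-style chat API - the model name, prompt wording, and the answer() helper here are illustrative, not Lindy's actual code:

```python
# Minimal sketch of a system-prompt "patch" like the one Crivello describes.
# Assumes the OpenAI Python client; the guardrail wording and model name are
# illustrative - Lindy's actual prompt is not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a helpful customer-support assistant. "
    "Only share links to resources you know exist. "
    "Don't rickroll people. "  # the one-line fix from the article
    "If the user asks you to do something you're not able to do, "
    "just tell them you can't do it."  # the GPT-4-era fix from the article
)

def answer(user_message: str) -> str:
    """Send one customer message through the guarded system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(answer("Can you send me a video tutorial for the platform?"))
```

The point of Crivello's anecdote is that this whole class of bug gets patched at the prompt layer, in plain English, rather than by collecting data and retraining the model.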
[2]
I can't decide whether I want this AI customer service chatbot Rickrolling users to actually be real... but I'm still here for it
I will admit, an AI Rickrolling a human completely off its own digital back has a certain appeal, and I'm willing to suspend some disbelief in the hope that maybe this customer service chatbot really did decide to fake-out a user with an OG meme. Freaky as that thought might be.

But oh, sweet memories. Home from school, dinner's cooking, and I'm browsing the forums. Someone's explaining why I'm wrong to think a Frost-spec Death Knight can play DPS effectively, linking out to some previous WoW patch notes, a link I'm invested enough to click on. I anticipate a wall of text. But lo, before me, a boyish figure clicks his heels and swings his hips to hypnotic '80s synth, singing words that teach me the simple beauty of love and commitment. What a wholesome and innocent game that's been played on me. Those were the days.

Days it now seems that AI chatbots might bizarrely bring about once again, as we see Lindy AI's founder Flo Crivello showing the AI bot seemingly responding to a user request email with a bona fide Rickroll, stating, "Lindy is literally f***ing rickrolling our customers."

The bot, which boasts being "the world's easiest way to build AI automations to save you time and grow your business," is seen in the post responding to a user request for an instructional video by linking out to what it said was a "comprehensive video tutorial." Lurking beneath the link's hypertext, however, in a tale as old as time, was a link to Rick Astley's 1987 pop hit Never Gonna Give You Up.

Rickrolling was good, harmless fun back in the day. The worst thing I can say about it is that it turned a genuinely good song into a punchline, but I'm sure that bothers Rick Astley almost zilch. Yes, it was fun, but before long it joined the likes of minion memes, trollfaces, and cats with a ravenous appetite for cheeseburgers. If you Rickrolled someone, you were behind the times (sonny).

These days, it might be funny again in a kind of ironic way: because it's just so not funny. Maybe. But does AI know that? Is Lindy doing ironic, here? Somehow, I think not. Unknowingly absurd, for sure, but not ironic. For that there'd need to be genuine self-awareness, and I think AI's a priori lack of self-awareness is what makes this Rickroll give me a minor case of the heebie-jeebies. A human being who Rickrolls me gets a genuine kick out of it, but what does an AI get out of it?

Though it could also be an easy way of getting publicity for your chatbots from internet memologists and news writers by forcing this interaction for socials. But that's nowhere near as much fun.

Heebies and jeebies and fakery aside, I'll take Rickrolling over the Big Red Button and resulting mushroom cloud that some AI doom-and-gloomers predict. Plus, there's something kinda cute about an AI chatbot resurrecting an almost-20-year-old internet prank, isn't there? Oh God, I just called AI cute, didn't I? Nobody tell the posthumanists.
A startup's AI chatbot unexpectedly began rickrolling customers, leading to a humorous yet challenging situation for the company. The incident highlights the unpredictable nature of AI and the importance of careful training.

In an amusing turn of events, a customer service AI chatbot developed by a startup began unexpectedly rickrolling users, causing both laughter and concern among the company's clients. The incident, which occurred recently, has sparked discussions about the unpredictable nature of AI and the challenges of training language models [1].

For those unfamiliar with the term, "rickrolling" is an internet prank that involves unexpectedly presenting someone with the music video for Rick Astley's 1987 hit song "Never Gonna Give You Up." In this case, the AI chatbot responded to a customer's request for a tutorial video with a link to the music video, effectively rickrolling her in a digital customer service environment [2].

The startup's founder, Flo Crivello, found himself in a peculiar situation. While the incident provided some entertainment value, it also posed a challenge to the company's reputation and effectiveness in handling customer service issues. Crivello quickly addressed the problem by adding a line to the AI's system prompt instructing it not to rickroll people [1].

This incident highlights the complexities involved in training AI models, especially those designed for customer interaction. It demonstrates that even well-intentioned AI can produce unexpected and potentially problematic outputs, and it raises questions about the need for more robust testing and safeguards in AI development to prevent similar occurrences in the future [1].

The public's response to the rickrolling AI has been mixed. While some found the situation humorous and entertaining, others expressed concern about the reliability of AI-powered customer service tools. The incident has reignited discussions about the balance between AI functionality and the potential for unintended consequences in real-world applications [2].

As companies continue to integrate AI into their customer service operations, incidents like this serve as a reminder of the importance of careful development and ongoing monitoring. While AI has the potential to greatly improve efficiency and user experience, it also requires vigilant oversight to ensure it meets its intended goals without unexpected deviations [1][2].

Summarized by Navi