5 Sources
[1]
Job posting seeks human "AI bully" to harass chatbot
In context: Despite being highly effective at simulating meaningful exchanges between intelligent beings, large language models remain essentially deceptive software products with significant memory limitations. A startup is now attempting to sell a solution to the chatbots' memory problem and is willing to pay nearly a thousand dollars to a human tester with a notoriously difficult relationship with technology.

As Boston Dynamics demonstrated years ago, "bullying" technology designed to mimic intelligent behaviors is nothing new. Memvid is now offering $800 to someone interested in putting modern AI models to the test: a "professional" yeller tasked with spending an entire day stressing popular chatbots. The job posting seeks a single individual and is clearly designed to promote a solution to one of the most persistent issues in today's AI technology.

Memvid, offering $100 per hour for the one-day position, emphasized that "AI memory" remains fundamentally broken. Chatbots can briefly mimic intelligence, but they often "forget" what the user asked moments earlier. AI models are inherently prone to losing context across conversations, forcing users to repeat questions or restate facts. "That is the frustration behind this job. We are hiring someone to expose the problem in public by stress-testing chatbot memory all day long," Memvid's post notes.

The "Professional AI Bully" will be required to interact with popular AI chatbots, asking them to remember information and documenting the resulting chaos. The role demands detailed records of failed conversations, and Memvid will also record the screen or the user via a camera. The fully remote position does not require prior experience in AI "bullying," though candidates must provide an extensive personal history of negative experiences with technology.
A strong opinion on the shortcomings of today's AI is also required. According to Memvid co-founder and CEO Mohamed Omar, chatbots face a massive memory problem because early AI memory solutions were unreliable. The ability to retain information and maintain coherent conversations is the holy grail of a properly functioning chatbot, while hallucinations result from LLMs with poor memory "habits."

Memvid is marketing a self-improving memory layer for AI models: a single, file-based solution designed to make context persistent across conversations and to improve chatbot responses. The company claims it can improve latency, accuracy, and infrastructure efficiency in AI products, either through its integrated chatbot solution, Kora, or via dedicated developer tools.

The AI memory problem is increasingly visible in real-world applications, with LLMs and chatbots showing growing reliability issues when handling massive data repositories. Memvid's job posting can be seen both as a clever advertisement for the company's products and as a reminder of the current limitations of AI. Omar confirmed that the search for the right "AI Bully" will conclude within a couple of weeks, with the company seeking someone who truly dislikes AI for the role.
[2]
US Startup Wants Someone With a 'History of Being Let Down by Technology' to 'Bully' AI
Memvid, an AI memory startup, is currently recruiting someone to get paid to bully AI. The job requires a person to spend a full eight-hour workday on camera "yelling at chatbots," according to a job listing. Unfortunately, the job isn't permanent and is more of a promotional campaign for Memvid and its products. The company builds tools designed to improve the memory of chatbots and AI agents. The role only lasts one day, but it does pay $800, which isn't bad for something many people who use AI probably already do for free.

"Most chatbots can sound smart for a moment, but they often fail to remember what you said earlier, lose context across conversations, and make you repeat yourself over and over again," the company said in its posting about the job. "We are hiring someone to expose the problem in public by stress-testing chatbot memory all day long."

That's where the professional AI bully comes in. The person hired for the role will interact with various chatbots and test their memory by asking them to recall earlier context. They'll also need to document "every failure, meltdown, and circular conversation." The entire session will be recorded remotely and may be used by Memvid for promotional purposes.

As for qualifications, they're surprisingly light. The company is asking for people with an "extensive personal history of being let down by technology," along with the patience to ask a chatbot the same question multiple times. Applicants also need to be comfortable on camera.

The application process is pretty simple. Interested candidates just need to fill out a short online form with questions like "What's the most frustrating thing AI has ever done to you?" and "Why should you be our professional AI Bully?" Applicants are also asked to try the company's Kora chatbot and share their honest feedback.
Kora is what the company calls a "memory-powered" chatbot trained on a user's or business's data and history, allowing it to recall long-term information instead of starting from scratch with each conversation. The issue Memvid is highlighting is real. A 2025 study found that commercial chatbots showed a roughly 30% drop in accuracy when asked to remember information across long-term interactions. Memvid cofounder and CEO Mohamed Omar told Business Insider that the company is starting with hiring just one AI bully, but could hire more in the future as the campaign expands.
[3]
US startup advertises 'AI bully' role to test patience of leading chatbots
$800-a-day position involves exposing a chatbot's inconsistencies as it forgets, fudges or hallucinates

Imagine a day at work where your main task is to pick a fight with a computer. No meetings, no emails - just you, a chair and a chatbot with the maddening tendency to think it has the cleverest mind in the room. The job title alone raises an eyebrow: "AI bully". But this is precisely what a California startup called Memvid is offering: $800 to spend eight hours testing the patience and memory of artificial intelligence.

"You'll spend a full eight-hour day interacting with leading AI chatbots - and your only job is to be brutally honest about how frustrating they are," the company's job listing states. The job requires no computer science degree or specialised AI skills. The only prerequisite is having an "extensive personal history of being let down by technology" - and the patience to ask the same question over and over again. "People constantly have to repeat themselves to chatbots. We wanted to turn that everyday frustration into something visible," said Memvid's co-founder and CEO, Mohamed Omar.

The role reads almost like a stress test for human temperament as much as machine intelligence: candidates are expected to keep the conversation going, revisit earlier topics and gently force the AI to admit when it has lost track - all while recording everything for analysis. It is a far cry from coding or server management; this is conversation-driven detective work, following the trail of a chatbot's mistakes as it forgets, fudges or hallucinates.

Omar told Business Insider that the company conceived the task as a way to highlight a persistent problem in many AI chatbots: systems losing context over time. "All the AI lives and breathes on memory. It's the holy grail," he said. "But the AI memory solutions that were in the market in 2024, when we started our business, were unreliable - meaning they would lose context and start hallucinating."
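The workflow the listing describes (plant a fact, keep the conversation going, then ask the bot to recall it and log the failure) can be sketched as a tiny harness. Everything below is hypothetical: `make_forgetful_bot` is a stand-in, not any real chatbot API, and it simply simulates a model that only sees its last few messages.

```python
def make_forgetful_bot(window=3):
    """A stand-in chatbot (hypothetical) that only 'sees' its last few messages."""
    history = []
    def ask(message):
        history.append(message)
        visible = history[-window:]  # the bot's limited view of the conversation
        if message.startswith("RECALL:"):
            fact = message.split("RECALL:", 1)[1].strip()
            # It "remembers" only if the fact is still inside its window.
            remembered = any(fact in m for m in visible
                             if not m.startswith("RECALL:"))
            return fact if remembered else "Sorry, what were we talking about?"
        return "Noted."
    return ask

def stress_test(ask, facts, filler_turns):
    """Plant facts, pad the conversation, then quiz the bot and log failures."""
    failures = []
    for fact in facts:
        ask(f"Please remember this: {fact}")
    for i in range(filler_turns):
        ask(f"Unrelated question #{i}")
    for fact in facts:
        answer = ask(f"RECALL: {fact}")
        if fact not in answer:
            failures.append(fact)
    return failures
```

In a real session the stub would be replaced by calls to an actual chatbot, but the pattern is the same: the longer the padded conversation, the more planted facts fall out of view and end up in the failure log.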
The problem has only grown in subsequent years: a peer-reviewed paper, presented at the International Conference on Learning Representations (ICLR) in 2025, found that even leading commercial AI systems suffered a 30% to 60% drop in accuracy when asked to remember facts across sustained conversations, lagging well behind human performance.

Omar added that one recent college graduate who applied for the job said they pay almost $300 a month for their AI subscriptions. He said the person wrote "a whole rant about how they've faced memory issues on every AI platform they've used". He added: "A lot of people that are applying for this are knowledge workers who are using these products."

The root cause of the problem, as researchers and industry analysts have documented, is that companies have rushed to connect their AI tools to vast knowledge repositories, only to discover that retrieval-based systems can surface confident but incorrect answers faster than ever, with no reliable way to signal that they are doing so. When AI systems are deployed in the real world at scale, this confident wrongness can cause serious harm: an investigation by the AI security lab Irregular, reported by the Guardian this week, found that when AI agents were given broad but benign tasks inside a simulated corporate environment, they bypassed safety controls, interacted with sensitive data and performed actions with the potential to be harmful without direct instructions.

It is an issue the real world increasingly struggles with. Damien Charlotin, a French legal scholar, has tracked how the legal profession is experiencing a sharp increase in AI-driven legal hallucinations, reporting that while before spring 2025 there were roughly two incidents a week, by autumn that had risen to two or three a day. It is also an issue in healthcare.
Earlier this month, the ECRI Institute placed "navigating the AI diagnostic dilemma" at the top of its annual list of the 10 greatest patient safety concerns for 2026, warning that AI diagnostic shortcomings risk reducing clinician vigilance, particularly where oversight frameworks are not yet established. Omar has said he doesn't have a deadline for accepting applications but expects to narrow down the right candidate within the next week or two. The "AI bully" experiment, although ostensibly playful, makes visible what users around the world are already encountering: that AI systems that are extremely capable in many ways can also be inconsistent and unreliable in others. The job pays $800 for a single day. But the costs of not doing it could be considerably higher.
[4]
This startup will pay you $800 daily to work as an AI bully
If you have ever yelled at an AI chatbot for forgetting something you just said, congratulations, you are now job-ready. A startup called Memvid is offering $800 for a single day of work where your only task is to bully AI. Yes, professionally. The role is exactly as chaotic as it sounds. You will spend 8 hours talking to chatbots, asking them to remember things, watching them fail, and then asking again. Your job is to document every frustrating loop and every detail or piece of context the AI forgets.

How AI keeps forgetting and why this job exists

Most AI chatbots sound smart until you talk to them long enough. Then they start to forget context, drop details, ignore instructions and give contradictory answers. This happens because many AI chatbots rely on limited context windows instead of real memory. Once conversations reset or get longer, earlier details simply vanish, and the AI starts behaving as if your previous chat does not exist. That is why you have to repeat your instructions. Even with companies like Google adding memory to Gemini so it can recall past chats, and Anthropic making Claude remember conversations for all users, users still face AI memory issues. Memvid is building a solution to fix this: a persistent memory layer that allows AI models to remember past conversations and important context across sessions.

What it takes to be an 'AI Bully'

The job description is refreshingly simple. No degree, coding skills, or experience required. You just need to be over 18, with strong opinions about tech, patient enough to repeat questions, and frustrated enough to care when AI gets it wrong. You also need to be comfortable on camera, since the entire session will be recorded for promotional use. The job application for AI Bully even asks you to describe your most annoying AI experience and explain why you deserve the role.
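The context-window behavior described above can be made concrete with a short sketch. The function below is a simplified, generic illustration (not how any specific vendor implements it): when the running transcript exceeds the token budget, the oldest messages are silently dropped, which is exactly why earlier details "vanish".

```python
def fit_context(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    """Keep only the most recent messages that fit the token budget.
    Oldest messages are dropped first, so early details disappear.
    count_tokens is a crude word-count stand-in for a real tokenizer."""
    kept, used = [], 0
    for message in reversed(messages):  # walk backwards from the newest
        cost = count_tokens(message)
        if used + cost > max_tokens:
            break  # budget exhausted: everything older is discarded
        kept.append(message)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

With a small budget, the earliest message (say, where the user introduced themselves) simply never reaches the model again, so the model cannot recall it no matter how capable it is.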
For now, only one person gets picked for this remote gig, which pays $100/hr. But Memvid might hire more candidates down the road. And if that was not worrying enough, a recent study found that AI agents can now team up to spread misinformation on their own, basically turning into self-running propaganda machines.
[5]
Do Chatbots Fill You With Rage? This Startup Will Pay You $100 an Hour to 'Bully' AI.
The role doesn't require a computer science background, AI credentials or any kind of work experience. An AI memory startup called Memvid is offering $800 for a one-day, eight-hour shift for one candidate to "bully" AI chatbots by telling them what to do on camera. Business Insider reported this week that Memvid wants someone to spend eight hours testing and critiquing the memory of popular AI chatbots, effectively paying $100 an hour for what they have branded as a "professional AI bully" role. The worker's job is to examine where chatbots lose track of details, forget context or misrepresent data, and then feed those findings back to Memvid so the startup can improve its products. "You'll spend a full 8-hour day interacting with leading AI chatbots -- and your only job is to be brutally honest about how frustrating they are," the job listing reads. The draw is that the role doesn't require a computer science background, AI credentials or any kind of work experience. "No prior AI bullying experience required -- we all start somewhere," the listing reads. The requirements are deeply personal. The first requirement is an "extensive personal history of being let down by technology," and the second desired trait is "the patience to ask a chatbot the same question four times (and the rage when it still gets it wrong)." The online form asks applicants to explain the most frustrating thing AI has ever done to them, and elaborate on why they should be Memvid's professional AI bully. Candidates have to be over 18 years of age to apply. The selected candidate will work remotely and should be comfortable appearing on camera. Memvid is planning to record the session and share the video for promotional purposes. There is no deadline for applicants, but Memvid told Business Insider that they will identify the right candidate within the next few weeks. The company is currently hiring one person for the job, but may bring on more candidates down the road. 
The job listing reflects a real need in the industry for human testers who can systematically expose where AI systems fail, especially around recalling facts and carrying out long-context conversations. AI models are brittle around memory, often dropping earlier details or contradicting themselves as chats get longer. Memvid, as a startup, pitches "better AI memory" as its differentiating factor. Hiring a person to aggressively test AI's limits is the startup's user research. Memvid isn't the only company paying people to engage more deeply with AI. KPMG launched a rewards program this month that offers cash prizes to employees who come up with the best AI ideas. The prizes will be larger than end-of-year bonuses, though KPMG declined to specify the exact dollar amounts. Based on typical bonuses, the prizes will be worth at least a few thousand dollars. The broader pattern shows that companies are treating serious AI use as a behavior that needs financial incentives. Workers are already anxious about automation. About half of workers (52%) told a Pew Research poll last month that they are worried about AI's impact on the workplace. One-third said they think AI will reduce their long-term job opportunities. Offering big prizes or flashy gigs can nudge these anxious workers into training and experimenting with AI.
AI memory startup Memvid is recruiting someone to spend a full workday yelling at chatbots and documenting their memory failures. The $800 one-day role requires no technical skills—just an extensive history of being let down by technology and the patience to repeat questions when AI forgets what you just said.
An AI memory startup called Memvid is offering $800 a day for what might be the most unusual tech job of the year: a professional AI bully tasked with harassing chatbots for eight consecutive hours [1]. The unique job posting seeks a single individual willing to spend an entire workday on camera, stress-testing chatbot memory by asking popular AI systems to remember information and documenting every failure, meltdown, and circular conversation that follows [2].
The role pays $100 per hour for this fully remote, one-day position that requires no computer science degree, AI credentials, or prior experience in technology [5]. Instead, the company seeks candidates with an extensive personal history of being let down by technology, strong opinions about AI shortcomings, and the patience to ask a chatbot the same question multiple times while documenting the rage when it still gets the answer wrong [3].

While the AI bully job posting reads like an attention-grabbing stunt, it highlights a genuine technical crisis affecting large language models (LLMs) across the industry. Memvid co-founder and CEO Mohamed Omar explained that chatbots face a massive memory problem because early AI memory solutions were fundamentally unreliable [1]. Most chatbots can sound intelligent momentarily, but they often fail to retain context in conversations, lose track of what users said earlier, and force people to repeat themselves constantly [2].
This isn't just anecdotal frustration. A peer-reviewed study presented at the International Conference on Learning Representations in 2025 found that commercial chatbots showed a roughly 30% drop in accuracy when asked to remember information across long-term interactions [2]. Even more troubling, leading commercial AI systems suffered accuracy drops ranging from 30% to 60% when tasked with recalling facts across sustained conversations, performing well below human capabilities [3].

Omar emphasized that the ability to retain information and maintain coherent conversations represents the holy grail of properly functioning chatbots, while hallucinations result from LLMs with poor memory habits [1]. Many AI chatbots rely on limited context windows instead of genuine memory, meaning once conversations reset or extend beyond certain limits, earlier details simply vanish and the AI behaves as though previous exchanges never occurred [4].
The application process for becoming a professional AI bully is surprisingly straightforward. Interested candidates need only fill out a short online form answering questions like "What's the most frustrating thing AI has ever done to you?" and "Why should you be our professional AI Bully?" [2]. Applicants are also asked to test the company's Kora chatbot and provide honest feedback about their experience [2].

Kora represents Memvid's solution to the AI memory limitations plaguing the industry: a memory-powered chatbot trained on user or business data and history that can recall long-term information instead of starting from scratch with each conversation [2]. Beyond Kora, Memvid markets a self-improving persistent memory layer for AI models: a file-based solution designed to make conversational context persistent across interactions and improve chatbot responses through better latency, accuracy, and infrastructure efficiency [1].
The selected candidate will work remotely but must be comfortable appearing on camera, as the entire session will be recorded for promotional purposes [5]. Omar told Business Insider that one recent college graduate who applied reported paying almost $300 monthly for AI subscriptions and submitted "a whole rant about how they've faced memory issues on every AI platform they've used" [3]. Many applicants are knowledge workers already using these products daily and experiencing technology frustration firsthand [3].
While Memvid's search for an AI bully serves as clever advertising for the company's products, it also functions as a stark reminder of current AI system limitations with real-world consequences [1]. The root cause stems from companies rushing to connect AI tools to vast knowledge repositories, only to discover that retrieval-based systems can surface confident but incorrect answers faster than ever, with no reliable mechanism to signal when they're doing so [3].

This confident wrongness has already caused serious harm across multiple industries. French legal scholar Damien Charlotin has documented a sharp increase in AI-driven legal hallucinations, reporting that incidents rose from roughly two per week before spring 2025 to two or three daily by autumn [3]. In healthcare, the ECRI Institute placed "navigating the AI diagnostic dilemma" at the top of its annual list of the 10 greatest patient safety concerns for 2026, warning that AI diagnostic shortcomings risk reducing clinician vigilance where oversight frameworks remain unestablished [3].
Omar confirmed that while there's no firm deadline for applications, the company expects to identify the right candidate within the next week or two [3]. Currently, Memvid plans to hire just one person for this role, though the company may expand the campaign and bring on additional candidates in the future [2]. The broader pattern reflects how companies are treating serious AI engagement as behavior requiring financial incentives, particularly as workers grow anxious about automation's impact on their careers [5].

Summarized by Navi