2 Sources
[1]
Microsoft-backed AI startup chatbots revealed to be human employees
Builder.ai used real people, not AI. Credit: Stefani Reynolds/Bloomberg via Getty Images

A startup promised that its AI assistant would build you an app. But the work was actually done by human engineers. Builder.ai, a startup backed by Microsoft, pitched itself as an AI-powered way to simplify app development. Clients chatted with the platform's signature AI assistant, Natasha, and received a functional, AI-generated app based on the information they provided. But instead of using AI technology to run the chatbot and create the app, the company hired 700 engineers in India to pose as Natasha in conversations with clients, and then to do the actual coding of the app.

The company's human-run chatbot operation is part of a larger problem in the tech industry: "AI-washing," in which tech companies claim their tools use AI far more than they actually do. It happens remarkably often, as when Coca‑Cola claimed its 2023 product Y3000 Zero Sugar was co-created with AI but provided no details on how AI was actually involved, leaving many to speculate that the claim was designed to attract attention and interest from consumers.

As companies scramble to incorporate AI into their offerings -- or at least give the impression that they have -- consumers may not share the tech sector's unfettered enthusiasm for AI everything. The Pew Research Center reports that 43 percent of respondents think AI will harm them, compared with just 24 percent who think the tech will benefit them. Moreover, "Public optimism is low regarding AI's impact on work," the Pew report reads. "While 73 [percent] of AI experts surveyed say AI will have a very or somewhat positive impact on how people do their jobs over the next 20 years, that share drops to 23 [percent] among U.S. adults." According to another study, about half of all respondents said they'd rather speak to a real person than an AI, compared with just 12 percent who said they preferred to speak with an AI chatbot. A quarter of respondents said it depended on the situation.

But AI-washing wasn't the problem that got Builder.ai in trouble. According to the Latin Times, a lender seized $37 million from the company after discovering it generated just $50 million in revenue -- less than a quarter of its claimed $220 million. Linas Beliūnas of Zero Hash accused Builder.ai of fraud in a LinkedIn post, writing: "It turns out the company had no AI and instead was just a group of Indian developers pretending to write code as AI." A former employee sued the company, Business Standard reported. An audit led to millions being seized from the company. Now, it owes Amazon $85 million and Microsoft $30 million for cloud services it never paid for. The company has filed for bankruptcy in the UK, India, and the U.S.

In a statement on LinkedIn, Builder.ai wrote that it would be "entering into insolvency proceedings and will appoint an administrator to manage the company's affairs." "Despite the tireless efforts of our current team and exploring every possible option, the business has been unable to recover from historic challenges and past decisions that placed significant strain on its financial position," the LinkedIn post read.
[2]
Microsoft-backed AI start-up exposed as scam business
British AI start-up Builder.ai, once valued at over £15bn and backed by Microsoft, has collapsed after revelations of widespread fraud. The company marketed its service as an AI-powered app development platform, but it turned out that around 700 Indian engineers were manually doing the work behind the scenes. Further investigation revealed that Builder.ai greatly exaggerated its revenues, including by recording fictitious sales and engaging in so-called "round-tripping" transactions with Indian company VerSe Innovation to artificially inflate its figures. These irregularities led to the company filing for bankruptcy in the US in May, with millions of dollars in unpaid debts to Microsoft and Amazon, among others.
Builder.ai, a Microsoft-backed startup claiming to offer AI-powered app development, has been exposed for using human engineers instead of AI. The company has filed for bankruptcy amid fraud allegations and significant debt.
Builder.ai, a startup backed by Microsoft, has been exposed as a fraudulent business, leading to its collapse and bankruptcy filings in multiple countries. The company, once valued at over £15 billion, marketed itself as an AI-powered platform for simplified app development [1]. However, investigations revealed that the company's operations were far from the AI-driven solutions it claimed to offer.
At the heart of Builder.ai's deception was its signature AI assistant, Natasha. Clients were led to believe they were interacting with an AI chatbot that would generate functional apps based on their inputs. In reality, the company had hired approximately 700 engineers in India to pose as Natasha and manually perform the app development work [1].
The company's fraudulent activities extended beyond its technological claims. Builder.ai was found to have greatly exaggerated its revenues, reporting $220 million when the actual figure was closer to $50 million, more than four times the real amount [1]. The company allegedly recorded fictitious sales and engaged in "round-tripping" transactions with Indian company VerSe Innovation to artificially boost its financial figures [2].
The exposure of Builder.ai's fraudulent practices has led to severe consequences:
- A lender seized $37 million from the company after the revenue shortfall came to light [1].
- Builder.ai owes Amazon $85 million and Microsoft $30 million for cloud services it never paid for [1].
- The company has filed for bankruptcy in the UK, India, and the U.S., announcing on LinkedIn that it would enter insolvency proceedings and appoint an administrator to manage its affairs [1][2].
Builder.ai's case is part of a larger trend in the tech industry known as "AI-washing," where companies exaggerate or falsely claim the use of AI in their products or services. This practice is becoming increasingly common as businesses attempt to capitalize on the growing interest in AI technology [1].
The Builder.ai scandal comes at a time when public opinion on AI is already mixed. According to a Pew Research Center report:
- 43 percent of respondents think AI will harm them, compared with just 24 percent who think it will benefit them [1].
- While 73 percent of AI experts surveyed say AI will have a very or somewhat positive impact on how people do their jobs over the next 20 years, that share drops to 23 percent among U.S. adults [1].
Another study found that about half of all respondents prefer to interact with a real person rather than an AI chatbot, highlighting the importance of transparency in AI-branded services [1].
Summarized by Navi