Curated by THEOUTPOST
On Wed, 24 Jul, 4:04 PM UTC
5 Sources
[1]
Lakera, which protects enterprises from LLM vulnerabilities, raises $20M | TechCrunch
Lakera, a Swiss startup that's building technology to protect generative AI applications from malicious prompts and other threats, has raised $20 million in a Series A round led by European venture capital firm, Atomico. Generative AI has emerged as the poster child of the burgeoning AI movement, driven by popular apps such as ChatGPT. But it remains a cause for concern within enterprise settings, largely due to issues around security and data privacy. For context, large language models (LLMs) are the engines behind generative AI and enable machines to understand and generate text just like a human. But whether you want such an application to write a poem or summarize a legal contract, it needs instructions to guide its output. These "prompts," however, can be constructed in such a way as to trick the application into doing something it's not supposed to, such as divulging confidential data that was used to train it, or give unauthorized access to private systems. Such "prompt injections" are a real and growing concern, and are specifically what Lakera is setting out to address. Founded out of Zurich in 2021, Lakera officially launched last October with $10 million in funding, with the express promise to protect organizations from LLM security weaknesses such as data leakage or prompt injections. It works with any LLM, including OpenAI's GPT-X, Google's Bard, Meta's LLaMA, and Anthropic's Claude. At its core, Lakera is pitched as a "low-latency AI application firewall" that secures traffic into and out of generative AI applications. The company's inaugural product, Lakera Guard, is built on a database that collates insights from myriad sources, including publicly available "open source" data sets such as those hosted on Hugging Face, in-house machine learning research, and a curious interactive game it developed called Gandalf, which invites users to attempt to trick it into revealing a secret password. The game gets more sophisticated (and thus more difficult to "hack") as the levels progress. But these interactions have enabled Lakera to build what it calls a "prompt injection taxonomy" that separates such attacks into categories. "We are AI-first, building our own models to detect malicious attacks such as prompt injections in real time," Lakera's co-founder and CEO David Haber explained to TechCrunch. "Our models continuously learn from large amounts of generative AI interactions what malicious interactions look like. As a result, our detector models continuously improve and evolve with the emerging threat landscape." Lakera says that by integrating their application with the Lakera Guard API, companies can better safeguard against malicious prompts. However, the company has also developed specialized models that scan prompts and application outputs for toxic content, with dedicated detectors for hate speech, sexual content, violence and profanities. "These detectors are particularly useful for publicly-facing applications, for example chatbots, but are used in other settings as well," Haber said. Similar to its prompt defense toolset, companies can integrate Lakera's content moderation smarts with a single line of code, and can get access to a centralized policy control dashboard to fine-tune the thresholds they want to set according to the content type. With a fresh $20 million in the bank, Lakera is now primed to expand its global presence, particularly in the U.S. The company already claims a number of fairly high-profile customers in North America, including U.S.-based AI startup Respell as well as Canadian mega-unicorn Cohere. "Large enterprises, SaaS companies and AI model providers are all racing to roll out secure AI applications," Haber said. "Financial services organizations understand the security and compliance risks and are early adopters, but we are seeing interest across industries. Most companies know they need to incorporate GenAI into their core business processes to stay competitive." Aside from lead backer Atomico, Lakera's Series A round included participation from Dropbox's VC arm, Citi Ventures and Redalpine.
[2]
Lakera AI raises $20M to ward off malicious prompts on generative AI models - SiliconANGLE
Lakera AI raises $20M to ward off malicious prompts on generative AI models Generative artificial intelligence threat detection startup Lakera AI AG, which is notable for creating one of the world's largest "AI red-teams" for testing generative AI models, said today it has closed on a $20 million early-stage round of funding. Today's Series A round was led by the European venture capital firm Atomico and saw participation from Citi Ventures, Dropbox Ventures and Redalpine, bringing the startup's total funding to $30 million. Zurich, Switzerland-based Lakera is the creator of a cloud-based service called Lakera Guard that helps companies spot when an individual tries to enter malicious prompts into a large language model. It's designed to protect against prompts that aim to trick AI models into generating harmful outputs, and prevent attempts to extract sensitive information from them. In addition, Lakera has a second product called MLTest, which is designed for developers that need to test the performance of their computer vision models before they deploy them in production. MLTest can be integrated within developer's continuous integration/continuous deployment workflows and automatically scan AI code for any issues they need to know about. The startup believes that demand for its tools is set to grow rapidly, citing data from Gartner Inc. that shows how 80% of all enterprises will have generative AI applications up and running in production by 2026, compared with less than 5% in 2023. Those enterprises will need a platform such as Lakera's because cybersecurity remains one of the biggest challenges for AI practitioners, mainly because traditional security tools are not designed for this new technology. What companies need are generative AI-specific security tools, it says. Lakera says generative AI poses a number of risks, with one of the biggest being the danger of malicious prompts. These are dangerous because anyone can try to trick an LLM into saying something compromising or harmful, because there's no need to write any code. LLMs understand natural language, and that opens the door for anyone to try to use them to get access to confidential data or cause mischief. Other threats include what Lakera calls "AI sleeper agents," which refers to the more sophisticated jailbreaking techniques used by coders to compromise AI models. There's also a growing danger around "AI-targeted worms," it says, which are designed to bypass security measures and harvest confidential information stored within AI systems. Lakera founder and Chief Executive David Haber says that old cybersecurity techniques simply aren't effective enough for generative AI applications. "Enterprises now operate in a world where anyone who knows how to talk knows how to hack," he pointed out. "Security solutions need to change but they can't get in the way of user experience." Lakera says its generative AI security tools have several benefits. Perhaps the most significant of these is that its cloud tools are powered by a repository of tens of millions of data points about AI-focused cyberattacks. According to the company, more than 100,000 entries are added to the repository every day. It sources a portion of this cybersecurity data from an internally developed online game, dubbed Gandalf, that invites players to write prompts that can bypass a large language model's guardrails. With more than 250,000 users, Lakera claims that Gandalf is effectively the world's largest "red-team." Red-teaming refers to a coordinated effort to discover flaws and vulnerabilities within AI systems. Another advantage of Lakera's tools, the company says, is that they're easy to deploy. It has created an application programming interface that enables Lakera Guard and MLTest to be deployed into developer's workflows with a single line of code. Atomico Partner Sasha Vidoborskiy said he invested in Lakera because it has already shown "impressive commercial pull," winning over customers such as Dropbox Inc. and one of the largest banks in the U.S. since its last funding round in October. "More than 35% of Fortune 100 companies are knocking on their door [because] they have an urgency to deploy generative AI applications into production but can't do it without protection in place," Vidoborskiy said. Lakera said the money from today's round will go toward expanding its presence in the U.S. and accelerating its product development efforts.
[3]
Exclusive: Lakera snags $20 million to prevent business Gen AI apps from going haywire and revealing sensitive data
It's the potential nightmare that haunts Fortune 500 company leaders working to develop chatbots and other generative AI applications: Hackers figure out how to trick their AI into revealing sensitive corporate or customer data. Lakera, a startup based in Zurich, Switzerland, announced today it has raised $20 million to help those leaders sleep peacefully. European VC Atomico led the funding round, with participation from Citi Ventures, Dropbox Ventures, and existing investors including Redalpine, bringing Lakera's total funding to $30 million. The company did not disclose its valuation in the latest fundraising. Lakera's platform, which is used by Dropbox, Citi, and a number of Fortune 100 tech and finance companies, lets companies set their own guardrails and boundaries around how a generative AI application can respond to prompts featuring text, images, or video. The technology is supposed to protect against the most widely used method of hacking into generative AI models, known as "prompt injection attacks," in which hackers manipulate generative AI to access a company's systems, steal confidential data, take unauthorized actions, and generate harmful content. Most Fortune 500 companies hope to put generative AI to work over the next two years, said Lakera CEO David Haber. Those businesses typically use off-the-shelf models like the one powering OpenAI's ChatGPT. Then, they build applications on top of that model -- a customer service chatbot, for example, or a research assistant -- that is connected to a company's sensitive data and integrated into business-critical functions. Safety and security must therefore be a top priority. "Existing security teams are facing completely new challenges in securing these Gen AI applications," Haber said. "We are processing everything that goes in and everything that comes out, and what we ultimately make sure is that these highly-capable generative AI applications do not take any unintended actions." He added that Lakera's platform is built on the company's own internal AI models -- not off-the-shelf options. "You can't be using ChatGPT to secure ChatGPT -- terrible idea." But the most important thing, Haber emphasized, is that customers can specify the context of what the Gen AI applications can and can't do, and assess any possible security issues, in real time. Customers can also implement concrete policies around what a chatbot can talk about, he said. For example, a company might not want it to discuss competitors or reveal any financial data. Haber said Lakera has one unique advantage in tracking AI threats: Gandalf, its online AI security game that has millions of users worldwide, including Microsoft (which uses it for security training). As users test their prompt injection skills with Gandalf's AI 'jailbreaking' game, the tool generates a real-time database of AI threats, which the company says is growing by tens of thousands of "uniquely new attacks every day," and helps keep Lakera's software up to date. Lakera plays in a competitive Gen AI security landscape along with other startups like HackerOne and BugCrowd. But Matt Carbonara, of Citi Ventures, said the Lakera team "has the background to build and evolve this product the market needs," adding that he liked its focus on prompt injection attacks. "When you have new attack surfaces, you need new countermeasures," he said. "The prompt injection attack approach is the first place people will be focused."
[4]
Lakera Raises $20 Million to Provide Security for GenAI Applications
Lakera has raised $20 million in a Series A funding round to expand its offerings in the real-time generative artificial intelligence (AI) applications security category. This round brings the firm's total funding to $30 million, and the firm will use the new funding to accelerate its product development and go-to-market strategy, Lakera said in a Wednesday (July 24) press release. "In less than two years it's estimated that 80% of enterprises will have deployed GenAI applications in production environments," the company said in the release. "The challenge is to secure these applications against AI-specific risks." Lakera said in the release that generative AI has introduced new attack methods, such as prompt and data poisoning attacks, that hackers can use to steal confidential information or trigger inappropriate behavior from the model. So, enterprises must put in place guardrails to ensure the model cannot be manipulated, according to the release. "Lakera provides real-time GenAI security so that the most powerful technology of our lifetime cannot be tricked into taking unintended actions," the company said. The company also uses an AI-first approach to stay ahead of continuously evolving threats, uses an ultra-low latency application programming interface (API) to ensure that its security solution doesn't compromise the user experience, and enables its customers to centralize their AI security by inserting a single API call in their applications, per the release. Lakera's latest funding round was led by Atomico, with participation from Citi Ventures, Dropbox and its existing investors, according to the release. Donald Tucker, head of corporate development and ventures at Dropbox, said of Lakera: "Their advanced technology is helping companies like Dropbox safeguard against vulnerabilities these new technologies pose." The National Security Agency (NSA) said in April that as AI is increasingly integrated into business operations, these systems are particularly vulnerable to cyberattacks. "AI brings unprecedented opportunity but also can present opportunities for malicious activity," NSA Cybersecurity Director Dave Luber said in a press release.
[5]
Lakera, a Swiss cybersecurity for generative AI startup, just raised $20 million using this 10-slide pitch deck
This story is available exclusively to Business Insider subscribers. Become an Insider and start reading now. Have an account? Log in. Lakera, a Swiss cybersecurity startup designed for generative AI, has raised $20 million in funding. Lakera's AI platform, Gandalf, is used by companies like Microsoft to help assess and manage prospective security risks generated by AI. The company, founded in 2021 and based in Zurich, says it has identified risks associated with generative AI, including prompt attacks using large language models and AI "sleeper agents" capable of jailbreaking, sometimes with just one image.
Share
Share
Copy Link
Swiss startup Lakera raises $20 million in Series A funding to protect businesses from generative AI security risks. The company's technology aims to prevent malicious prompts and ensure safe deployment of AI models.
Lakera, a Swiss artificial intelligence security startup, has successfully raised $20 million in a Series A funding round led by Felicis, with participation from Picus Capital and existing investors 1. The company, founded in 2022, specializes in protecting enterprises from vulnerabilities associated with large language models (LLMs) and other generative AI technologies 2.
As generative AI models become increasingly prevalent in business operations, concerns about their security vulnerabilities have grown. Lakera's technology aims to safeguard these AI systems against potential threats, including malicious prompts that could compromise sensitive information or manipulate AI outputs 3.
The company offers two main products: Guardrails and Inspector. Guardrails is designed to protect LLMs from harmful inputs and prevent the leakage of sensitive information. Inspector, on the other hand, focuses on evaluating the security of AI models before they are deployed 4.
Since its inception, Lakera has experienced significant growth, with its customer base expanding from 5 to 45 within a year. The company's clients include major corporations such as Volkswagen and Booking.com, as well as government institutions like the U.S. Air Force 5.
With the new funding, Lakera plans to expand its team and further develop its AI security solutions. The company aims to address the growing demand for robust security measures in the rapidly evolving field of generative AI. As businesses increasingly adopt AI technologies, the market for AI security solutions is expected to grow significantly 1.
Lakera's innovative approach to AI security has garnered attention from major players in the tech industry. The company has established partnerships with leading AI companies, including OpenAI and Anthropic, further validating its position in the market 2.
As concerns about AI security continue to grow, Lakera's solutions could play a crucial role in accelerating the safe adoption of generative AI technologies across various industries. By providing robust security measures, the company aims to instill confidence in businesses looking to leverage the power of AI while mitigating associated risks 3.
Reference
[2]
Prompt Security, a startup specializing in generative AI security, has secured $18 million in Series A funding to advance its mission of protecting enterprises from AI-related threats while providing visibility and governance over AI tools.
2 Sources
2 Sources
Harmonic Security, a startup focused on protecting sensitive data from leaking through AI use, has secured $17.5 million in Series A funding. The company aims to address the growing concern of data exposure in enterprise AI applications.
2 Sources
2 Sources
AIceberg, a startup specializing in AI trust, safety, security, and compliance technology, has raised $10 million in seed funding and launched a new AI trust platform. The platform aims to provide enterprise-grade security with real-time, automated validation of AI application traffic.
2 Sources
2 Sources
Kindo, a startup focused on secure enterprise AI management, has raised $20.6 million in funding. The company aims to address the growing security concerns surrounding AI adoption in businesses.
2 Sources
2 Sources
As AI technologies advance, cybersecurity faces new challenges and opportunities. This story explores the intersection of AI and cybersecurity, highlighting NVIDIA's role and the broader implications for system protection in the age of generative AI.
2 Sources
2 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved