On Thu, 3 Apr, 8:01 AM UTC
6 Sources
[1]
OpenAI just made its first cybersecurity investment | TechCrunch
Generative AI has vastly expanded the toolkit available to hackers and other bad actors. It's now possible to do everything from deepfaking a CEO to creating fake receipts. OpenAI, the biggest generative AI startup of them all, knows this better than anyone, and it has just invested in another AI startup that helps companies defend against these kinds of attacks.

New York-based Adaptive Security has raised a $43 million Series A co-led by the OpenAI Startup Fund and Andreessen Horowitz, it announced Wednesday. This marks OpenAI's first investment in a cybersecurity startup, OpenAI confirmed to TechCrunch.

Adaptive Security simulates AI-generated 'hacks' to train employees to spot these threats. You might pick up the phone and hear the voice of your CTO asking for a verification code. That wouldn't be your actual CTO, but a spoof generated by Adaptive Security. The platform doesn't just spoof phone calls: it also covers texts and emails, while scoring which parts of a company are most vulnerable and training staff to spot the risks.

The startup focuses on hacks that require a human employee to do something they're not supposed to, like click on a bad link. These kinds of 'social engineering' hacks, while basic, have led to huge losses: think of Axie Infinity, which lost over $600 million in 2022 due to a fake job offer sent to one of its developers. AI tools have made social engineering hacks easier than ever, co-founder and CEO Brian Long told TechCrunch.

Launched in 2023, Adaptive now has over 100 customers, and Long says positive feedback from them helped attract OpenAI to the cap table. It doesn't hurt that Long is a veteran entrepreneur with two previous successes: mobile ad startup TapCommerce, which he sold to Twitter in 2014 (reportedly for over $100 million), and ad-tech firm Attentive, which was last valued at over $10 billion in 2021, according to one of its investors.

Long told TechCrunch that Adaptive Security will use its latest funding mostly on hiring engineers to build out its product and keep up in the AI "arms race" against bad actors.

Adaptive Security joins a long list of other cyber startups working on the boom in AI threats. Cyberhaven just raised $100 million at a $1 billion valuation to help stop staff from putting sensitive info into tools like ChatGPT, Forbes reported. There's also Snyk, which partly credits the rise of insecure AI-generated code with pushing its ARR north of $300 million. And deepfake detection startup GetReal raised $17.5 million last month.

As AI threats become more sophisticated, Long has one simple tip for company employees worried about getting their voice cloned by hackers: "Delete your voicemail," he recommends.
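The workflow TechCrunch describes is a loop: run simulated attacks over several channels, record who falls for them, and score which parts of the company are most exposed. A minimal sketch of that scoring step might look like the following. This is not Adaptive Security's code; the SimulationEvent fields and the per-department metric are assumptions made purely for illustration.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class SimulationEvent:
    """One simulated social-engineering attempt against an employee (hypothetical schema)."""
    employee: str
    department: str
    channel: str          # "call", "sms", or "email"
    fell_for_it: bool     # e.g. clicked the link or read out a verification code


def vulnerability_by_department(events: list[SimulationEvent]) -> dict[str, float]:
    """Return the fraction of simulated attacks each department fell for."""
    hits: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for event in events:
        totals[event.department] += 1
        if event.fell_for_it:
            hits[event.department] += 1
    return {dept: hits[dept] / totals[dept] for dept in totals}


if __name__ == "__main__":
    events = [
        SimulationEvent("alice", "finance", "call", True),
        SimulationEvent("bob", "finance", "email", False),
        SimulationEvent("carol", "engineering", "sms", False),
    ]
    print(vulnerability_by_department(events))  # {'finance': 0.5, 'engineering': 0.0}
```

A score like this is what would feed the "which parts of a company might be most vulnerable" view the article mentions, and which employees then get targeted training.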
[2]
OpenAI backs deepfake cybersecurity startup Adaptive Security in new funding round
Artificial intelligence cybersecurity company Adaptive Security announced Wednesday that it closed a $43 million funding round in March, co-led by OpenAI and Andreessen Horowitz. The investment marks OpenAI's first outside backing of a cybersecurity firm.

Adaptive Security simulates AI-powered attacks against companies, specializing in training for deepfake attacks and other phishing threats. "The technology is getting better and better every day," CEO Brian Long said on CNBC's "Squawk Box." "It's not just voice and likeness, it's trained in all of the open-source information out there about you."

The company uses data and AI learning to simulate attacks that go beyond imitating an individual's voice: like most sophisticated modern phishing attacks, the technology draws on details about someone's family, background, personality and more.
[3]
OpenAI just made its first major cybersecurity investment
ChatGPT maker OpenAI has backed a security start-up in a sign the company might be about to focus more heavily on cyber protections. With generative AI enabling more sophisticated cyberattacks like deepfakes and phishing content, OpenAI has taken on a major supporting role for Adaptive Security. The New York-based startup helps companies defend against AI-powered cyberattacks and already counts over 100 enterprise clients on its books since publicly launching in January 2023.

"I'm thrilled to announce our $43 million investment round, led by Andreessen Horowitz (a16z) and the OpenAI Startup Fund - marking OpenAI's first-ever investment in a cybersecurity company," shared Adaptive's CEO and co-founder Brian Long.

Long noted that social engineering continues to be the leading cause of security breaches, but artificial intelligence has enabled attackers to "[increase] the efficacy and blast radius of these attacks." It doesn't just come down to the creation of content, because AI can now imitate colleagues and business leaders to "make phone calls, send emails or text your team."

For decades, policymakers, leaders and individuals from all walks of life have been split on whether we can fight fire with fire, but Long believes AI can be the solution to AI-enhanced threats: "With the right models and data, we can simulate realistic AI attacks, train employees to recognize threats, triage suspicious behavior in real time, and surface risk before it turns into loss."

Demonstrating how realistic deepfakes can be, Long even has a deepfake of himself on the website that customers can interact with. I asked the fake Brian to tell me about the company: "We offer a suite of next-generation cybersecurity training and simulation products designed to protect organizations from advanced threats," he said.

It's unclear how much money OpenAI invested individually, but the fact that the AI giant is now getting involved with cybersecurity - not least in relation to its own products' impact - is especially noteworthy.
[4]
Adaptive Security raises $43M for its AI cyberattack simulation platform - SiliconANGLE
Adaptive Security Inc., a startup that helps companies test their cybersecurity posture using simulated social engineering attacks, has raised $43 million in fresh funding. Andreessen Horowitz and the OpenAI Startup Fund led the Series A round. Adaptive Security detailed in its funding announcement today that more than a half dozen other backers chipped in as well. The group included executives from Google LLC, Workday Inc. and other major tech firms.

Installing malware is not the only way that cybercriminals can gain access to a company's systems. Many hacking campaigns use social engineering to trick employees into divulging passwords or sensitive data. Over the past few years, artificial intelligence has made it considerably easier for hackers to launch such attacks.

Adaptive Security provides a platform that helps companies reduce the risk posed by AI-powered social engineering campaigns. It can launch a simulated cyberattack and detect when employees respond. From there, the software provides personalized cybersecurity training to those employees to help them avoid future breaches.

Adaptive Security uses large language models to create simulated phishing emails. The platform could, for example, generate an email purporting to be from a customer interested in a newly launched product. The LLMs can customize the message based on publicly available information about the product.

Distributing a simulated phishing email to employees often requires changing the configuration of a company's email system. According to Adaptive Security, its platform eases the task with a feature called Direct Email Injection, which removes the need to modify an email system's cybersecurity and filtering settings.

Adaptive Security's platform is also capable of interacting with workers through other channels. It can mimic the voice of executives and package phishing messages into texts, which have a higher open rate than emails. Adaptive Security also offers features that can impersonate the support staff of a company's suppliers.

The platform enables administrators to monitor cyberattack simulations via a centralized dashboard. It displays the number of simulated phishing emails launched as part of a test, their effectiveness and related metrics. Cybersecurity teams can use this information to identify areas for improvement in their company's cybersecurity posture. Workers, meanwhile, have access to a library of gamified cybersecurity training modules, and Adaptive Security also enables customers to create custom training materials using AI.

Another feature of the company's platform allows employees to notify the cybersecurity team when they receive a real phishing message. "When someone reports a suspicious message, our AI doesn't just forward it to IT -- it analyzes the message in real time, scores the risk, and helps security teams act fast," Adaptive Security co-founder and Chief Executive Officer Brian Long detailed in a blog post today.
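The LLM step described above, generating a message that appears to come from a customer asking about a newly launched product and personalizing it with public information, could be approximated with a prompt along the lines below. This is a hedged sketch rather than Adaptive Security's implementation: the use of the OpenAI Python SDK, the model name, the function, and the prompt wording are all assumptions, and the output is explicitly labeled as a training simulation.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def simulated_phishing_email(product: str, public_facts: list[str]) -> str:
    """Draft a clearly labeled TRAINING phishing email personalized with public facts.

    Hypothetical helper for illustration only; not Adaptive Security's API.
    """
    prompt = (
        "You are helping build an internal security-awareness simulation. "
        "Write a short email from a prospective customer asking about the "
        f"newly launched product '{product}'. Weave in these publicly "
        f"available details: {'; '.join(public_facts)}. "
        "Append the footer '[SIMULATION - security training]'."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(simulated_phishing_email(
        "Acme Analytics Cloud",  # made-up product name
        ["announced at a trade show last week", "aimed at retail customers"],
    ))
```

In a real deployment, the generated message would then be delivered through whatever mechanism the platform provides, such as the Direct Email Injection feature described above, and employee responses would be tracked for the dashboard metrics.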
[5]
OpenAI enters cybersecurity with $43M deepfake bet
In its first cybersecurity investment, OpenAI co-led a $43 million Series A funding round for Adaptive Security, a startup specializing in defending against AI-driven deepfake attacks. With generative AI enhancing the capabilities of hackers, including the ability to create convincing deepfakes and counterfeit documents, OpenAI is directly addressing the rising threat by backing AI-driven defense mechanisms.

Adaptive Security, based in New York, secured $43 million in a Series A round co-led by OpenAI's startup fund and Andreessen Horowitz. OpenAI confirmed this investment marks its first foray into the cybersecurity sector.

Adaptive Security uses simulated AI-generated hacks to train employees to identify and mitigate these advanced threats. The platform simulates attacks across various channels, including phone calls, texts and emails, to evaluate vulnerabilities and train staff. The company focuses on social engineering tactics that target human vulnerabilities, where employees might be tricked into compromising security. CEO and co-founder Brian Long noted the increasing ease of executing social engineering attacks with AI tools.

Launched in 2023, Adaptive Security serves over 100 customers, with positive feedback playing a role in attracting OpenAI's investment. Axie Infinity's $600 million loss in 2022 due to a fake job offer demonstrates the potential damage from such attacks.

Long's prior ventures include TapCommerce, acquired by Twitter in 2014 for reportedly over $100 million, and ad-tech firm Attentive, valued at more than $10 billion in 2021 by one of its investors. Adaptive Security intends to allocate the new funding primarily to hiring engineers and advancing its product development to counter AI-driven threats.

Other cyber startups are also addressing AI threats, including Cyberhaven, which recently raised $100 million to prevent sensitive data leaks into tools like ChatGPT, and Snyk, which has seen increased demand due to insecure AI-generated code. As for personal security, Long advises individuals concerned about voice cloning to "delete your voicemail."
[6]
Why OpenAI and Andreessen Horowitz Just Bet $43 Million on This Little-Known Startup
According to a blog post on Adaptive Security's website, this is the first time OpenAI's Startup Fund has ever invested in a cybersecurity company.

Like most other cybersecurity firms, which offer software to keep your data secure and protected from hacks, Adaptive Security uses an AI system to analyze messages that have been flagged as suspicious, assigns them risk levels, and sends a report to your company's security team. But Adaptive Security doesn't just alert you to potential security breaches: it also stress-tests human employees by attempting to trick them into falling for scams.

Most hacks and cybersecurity breaches occur because of human error. An employee gets a message that seems to be from their manager, requesting that they take a specific action, and suddenly the entire enterprise is at risk. These kinds of scams are becoming more commonplace because criminals can use generative AI to greatly increase the volume, frequency, and variety of their attempts to gain access to secure systems.
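To make that analyze-score-report loop concrete, here is a deliberately simple triage sketch. It stands in for whatever model Adaptive Security actually uses; the keyword weights, thresholds, and routing labels are invented for illustration, and a production system would rely on a trained classifier rather than a hand-written list.

```python
import re

# Hypothetical keyword weights; a real system would use a trained model,
# not a static pattern list like this.
RISK_SIGNALS = {
    r"\burgent(ly)?\b": 2,
    r"\bgift card\b": 3,
    r"\bverification code\b": 3,
    r"\bwire transfer\b": 3,
    r"https?://\S+": 1,
}


def triage(message: str) -> dict:
    """Score a reported message and decide where to route it."""
    score = sum(
        weight
        for pattern, weight in RISK_SIGNALS.items()
        if re.search(pattern, message, re.IGNORECASE)
    )
    level = "high" if score >= 5 else "medium" if score >= 2 else "low"
    return {
        "score": score,
        "level": level,
        "route_to": "security-team" if level != "low" else "archive",
    }


print(triage("URGENT: your manager needs a wire transfer today, click https://example.com"))
# {'score': 6, 'level': 'high', 'route_to': 'security-team'}
```

The point of the sketch is the shape of the workflow: a flagged message comes in, gets a risk score and level, and anything above a threshold is routed to the security team rather than silently forwarded.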
OpenAI makes its first cybersecurity investment, co-leading a $43 million Series A funding round for Adaptive Security, a startup specializing in AI-driven defense against deepfake and social engineering attacks.
OpenAI, the company behind ChatGPT, has made its first significant foray into the cybersecurity sector by co-leading a $43 million funding round for Adaptive Security. This New York-based startup, founded in 2023, specializes in defending against AI-driven cyber threats, particularly deepfakes and social engineering attacks [1][2].
The Series A funding round, co-led by OpenAI's startup fund and Andreessen Horowitz, marks a strategic shift for OpenAI. This move acknowledges the growing concerns about AI-powered cyberattacks and demonstrates OpenAI's commitment to addressing the potential misuse of its own technology [3][5].
Adaptive Security's platform uses AI to simulate sophisticated cyberattacks, training employees to recognize and respond to these threats. The company's approach goes beyond traditional cybersecurity measures: it simulates deepfaked phone calls, texts and phishing emails, scores which parts of an organization are most vulnerable, and delivers personalized and gamified training to at-risk employees [1][4].
The investment comes at a crucial time when generative AI technologies have significantly expanded the capabilities of cybercriminals. Social engineering attacks, which exploit human vulnerabilities, have become more sophisticated and easier to execute [1][5].
Adaptive Security joins a growing field of cybersecurity startups focusing on AI-related threats. Other notable players include Cyberhaven, which raised $100 million to stop staff from putting sensitive information into tools like ChatGPT; Snyk, which partly credits insecure AI-generated code for pushing its ARR past $300 million; and deepfake detection startup GetReal, which raised $17.5 million [1][5].
Adaptive Security is led by CEO and co-founder Brian Long, a seasoned entrepreneur with previous successful ventures in ad-tech and mobile advertising [1]. The company plans to use the new funding primarily for hiring engineers and advancing its product development to stay ahead in the AI "arms race" against bad actors [1][5].
This investment highlights the dual nature of AI in cybersecurity, as both a threat and a defense mechanism. As AI technologies continue to advance, the cybersecurity landscape is likely to evolve rapidly, requiring constant innovation and vigilance [2][3].