2 Sources
[1]
A summer of security: empowering cyber defenders with AI
And when it comes to security opportunities -- we're thrilled to be driving progress in three key areas ahead of the summer's biggest cybersecurity conferences, Black Hat USA and DEF CON 33: agentic capabilities, next-gen security model and platform advances, and public-private partnerships focused on putting these tools to work.

Last year, we announced Big Sleep, an AI agent developed by Google DeepMind and Google Project Zero that actively searches for and finds unknown security vulnerabilities in software. By November 2024, Big Sleep had found its first real-world security vulnerability, showing the immense potential of AI to plug security holes before they impact users. Since then, Big Sleep has continued to discover multiple real-world vulnerabilities, exceeding our expectations and accelerating AI-powered vulnerability research. Most recently, based on intel from Google Threat Intelligence, the Big Sleep agent discovered an SQLite vulnerability (CVE-2025-6965) -- a critical security flaw that was known only to threat actors and was at risk of being exploited. By combining threat intelligence with Big Sleep, Google was able to predict that the vulnerability was about to be used and cut it off beforehand. We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild.

These AI advances don't just help secure Google's products. Big Sleep is also being deployed to help improve the security of widely used open-source projects -- a major win for ensuring faster, more effective security across the internet more broadly. These cybersecurity agents are a game changer, freeing up security teams to focus on high-complexity threats and dramatically scaling their impact and reach. But of course this work needs to be done safely and responsibly.
In our latest white paper, we outline our approach to building AI agents in ways that safeguard privacy, mitigate the risks of rogue actions, and ensure the agents operate with the benefit of human oversight and transparency. When deployed according to secure-by-design principles, agents can give defenders an edge like no other tool that came before them. We will continue to share our agentic AI insights and report findings through our industry-standard disclosure process. You can keep tabs on all publicly disclosed vulnerabilities from Big Sleep on our issue tracker page.
[2]
Google expands AI-driven cybersecurity efforts with updates ahead of Black Hat and DEF CON - SiliconANGLE
Google LLC is gearing up its push into artificial intelligence-powered cybersecurity with a series of major announcements this week ahead of the Black Hat USA and DEF CON 33 conferences in early August. The search giant is spotlighting the growing role of agentic AI in detecting and preventing threats, including updates on its Big Sleep vulnerability discovery agent, a new AI-enabled version of its forensic tool Timesketch and its insider threat detection system FACADE.

Leading the list of announcements is news relating to Big Sleep, an autonomous AI agent launched last year to proactively hunt for unknown software vulnerabilities. Google has revealed that Big Sleep recently identified a critical SQLite flaw based on threat intelligence from the Google Threat Intelligence Group. The vulnerability discovered by Big Sleep had been known only to threat actors and was at risk of imminent exploitation. According to Google, this marks the first time an AI agent has directly prevented a live cyberattack, a milestone in AI-driven defense. Big Sleep is also being deployed to help improve the security of widely used open-source projects, a move Google describes as a "major win for ensuring faster, more effective security across the internet more broadly."

Ahead of the Black Hat conference, Google is also debuting new agentic capabilities for Timesketch, its open-source digital forensics platform. The upgraded version of Timesketch can now autonomously perform initial forensic investigations by analyzing logs and summarizing findings, cutting response times and allowing security analysts to focus on more complex tasks.

Google is also providing the first in-depth technical look at FACADE (Fast and Accurate Contextual Anomaly Detection Engine), an AI-based insider threat detection system used internally since 2018.
Interestingly, FACADE doesn't rely on historical attack data but instead leverages contrastive learning to flag suspicious activity in real time, processing billions of events daily across Google's infrastructure.

The company will also be on the ground at DEF CON 33, where it's partnering with Airbus SE to host a Capture the Flag competition. The event will feature challenges designed to showcase how AI assistants can collaborate with human participants to solve real-world security puzzles. Google will also support the final stage of the Defense Advanced Research Projects Agency-led AI Cyber Challenge, where competitors will demonstrate AI tools designed to secure open-source software.

Google says it's taking a responsible approach to the rise of agentic AI in security. The company plans to donate data from its Secure AI Framework to the Coalition for Secure AI to support collaborative workstreams on agent safety, cyber defense and software supply chain security. To support its commitment to responsible use, Google has also released a new white paper detailing its approach to building AI agents that are secure by design. The paper emphasizes principles like human oversight, transparency and risk mitigation and how agentic systems can be deployed responsibly to maximize cybersecurity impact while safeguarding privacy and preventing unintended actions.

"This summer's advances in AI have the potential to be game-changing, but what we do next matters," said Kent Walker, president of Global Affairs at Google. "By building these tools the right way, applying them in new ways and working together with industry and governments to deploy them at scale, we can usher in a digital future that's not only more prosperous, but also more secure."
Google announces major advancements in AI-driven cybersecurity, including the first-ever prevention of a live cyberattack by an AI agent, ahead of Black Hat USA and DEF CON 33 conferences.
In a groundbreaking development, Google has announced that its AI agent, Big Sleep, has successfully prevented a live cyberattack for the first time. This milestone achievement comes as part of Google's expanded efforts in AI-driven cybersecurity, unveiled ahead of the prestigious Black Hat USA and DEF CON 33 conferences [1][2].
Source: SiliconANGLE
Big Sleep, an autonomous AI agent launched last year to proactively hunt for unknown software vulnerabilities, recently identified a critical SQLite flaw (CVE-2025-6965) based on intelligence from the Google Threat Intelligence Group. This vulnerability was previously known only to threat actors and was at imminent risk of exploitation [1].
Google's cybersecurity push extends beyond Big Sleep, encompassing several key areas:
Timesketch Upgrade: Google has introduced new agentic capabilities to Timesketch, its open-source digital forensics platform. The enhanced version can now autonomously perform initial forensic investigations by analyzing logs and summarizing findings, significantly reducing response times [2].
FACADE System: For the first time, Google is providing an in-depth technical look at FACADE (Fast and Accurate Contextual Anomaly Detection Engine), an AI-based insider threat detection system used internally since 2018. FACADE leverages contrastive learning to flag suspicious activity in real-time, processing billions of events daily across Google's infrastructure [2].
Open-Source Security: Big Sleep is being deployed to improve the security of widely used open-source projects, which Google describes as a "major win for ensuring faster, more effective security across the internet more broadly" [1][2].
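Google has not published FACADE's internals here, but the core idea attributed to it above -- scoring activity by how much it deviates from contextual peers rather than matching it against historical attack data -- can be sketched in miniature. The following toy example is purely illustrative (the vectors, names, and scoring rule are all invented, not FACADE's actual algorithm): an event is embedded as a feature vector and flagged when it is dissimilar to everything its peer group normally does.

```python
import math

def cosine(a, b):
    """Cosine similarity between two non-zero feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def anomaly_score(event_vec, peer_vecs):
    """Score an event by dissimilarity to its closest contextual peer.

    No historical attack data is needed: an event that resembles at
    least one peer's normal behavior scores near 0, while an event
    unlike anything the peer group does scores high.
    """
    return 1.0 - max(cosine(event_vec, p) for p in peer_vecs)

# Invented embeddings of "normal" activity for a peer group.
peers = [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1], [1.1, 0.0, 0.05]]

normal = [1.0, 0.15, 0.05]  # resembles peer activity -> low score
odd = [0.0, 0.1, 1.0]       # unlike anything peers do -> high score

print(anomaly_score(normal, peers))  # small
print(anomaly_score(odd, peers))     # large
```

A real system of this kind would learn the embeddings (e.g. via a contrastive objective pulling an event toward its own context and pushing it away from others) and calibrate a flagging threshold; the sketch only shows why no labeled attack history is required.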
Google emphasizes its commitment to responsible AI development in cybersecurity:
White Paper Release: The company has published a white paper outlining its approach to building AI agents that are secure by design. The paper focuses on principles such as human oversight, transparency, and risk mitigation [1][2].
Industry Collaboration: Google plans to donate data from its Secure AI Framework to the Coalition for Secure AI, supporting collaborative workstreams on agent safety, cyber defense, and software supply chain security [2].
DEF CON 33 Partnership: Google is partnering with Airbus SE to host a Capture the Flag competition at DEF CON 33, showcasing how AI assistants can collaborate with human participants to solve real-world security puzzles [2].
The advancements in AI-driven cybersecurity tools are expected to have far-reaching implications:
Enhanced Threat Detection: The ability of AI agents like Big Sleep to predict and prevent vulnerabilities before they are exploited represents a significant leap in proactive cybersecurity measures [1].
Efficiency Boost: By automating initial forensic investigations and anomaly detection, tools like Timesketch and FACADE allow security teams to focus on more complex threats, dramatically scaling their impact and reach [1][2].
Industry-Wide Benefits: Google's efforts in open-source security and collaboration with industry partners aim to improve cybersecurity across the entire digital landscape [2].
As Kent Walker, president of Global Affairs at Google, stated, "This summer's advances in AI have the potential to be game-changing, but what we do next matters." The company's focus on responsible AI development and deployment in cybersecurity sets a precedent for the industry, potentially ushering in a more secure digital future [2].
Mira Murati's AI startup Thinking Machines Lab secures a historic $2 billion seed round, reaching a $12 billion valuation. The company plans to unveil its first product soon, focusing on collaborative general intelligence.
9 Sources
Startups
13 hrs ago
Meta's new Superintelligence Lab is considering abandoning its open-source AI model, Behemoth, in favor of developing closed models, marking a significant shift in the company's AI strategy and potentially reshaping the AI landscape.
7 Sources
Technology
21 hrs ago
AMD and Nvidia receive approval to resume sales of specific AI chips to China, marking a significant shift in US trade policy and potentially boosting their revenues.
5 Sources
Business and Economy
21 hrs ago
Leading AI researchers from major tech companies and institutions urge the industry to prioritize studying and preserving Chain-of-Thought (CoT) monitoring capabilities in AI models, viewing it as a crucial but potentially fragile tool for AI safety.
3 Sources
Technology
5 hrs ago
Tech giants Google and Meta announce multi-billion dollar investments in data centers and AI infrastructure across the US, with a focus on Pennsylvania and the PJM Interconnection region.
9 Sources
Business and Economy
21 hrs ago