4 Sources
[1]
A summer of security: empowering cyber defenders with AI
And when it comes to security opportunities, we're thrilled to be driving progress in three key areas ahead of the summer's biggest cybersecurity conferences, Black Hat USA and DEF CON 33: agentic capabilities, next-gen security model and platform advances, and public-private partnerships focused on putting these tools to work.

Last year, we announced Big Sleep, an AI agent developed by Google DeepMind and Google Project Zero that actively searches for and finds unknown security vulnerabilities in software. By November 2024, Big Sleep had found its first real-world security vulnerability, showing the immense potential of AI to plug security holes before they impact users. Since then, Big Sleep has continued to discover multiple real-world vulnerabilities, exceeding our expectations and accelerating AI-powered vulnerability research.

Most recently, based on intel from Google Threat Intelligence, the Big Sleep agent discovered an SQLite vulnerability (CVE-2025-6965), a critical security flaw that was known only to threat actors and was at risk of being exploited. By combining threat intelligence with Big Sleep, Google was able to predict that the vulnerability was about to be used and cut it off beforehand. We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild.

These AI advances don't just help secure Google's products. Big Sleep is also being deployed to help improve the security of widely used open-source projects, a major win for ensuring faster, more effective security across the internet more broadly. These cybersecurity agents are a game changer, freeing up security teams to focus on high-complexity threats and dramatically scaling their impact and reach. But of course this work needs to be done safely and responsibly.
In our latest white paper, we outline our approach to building AI agents in ways that safeguard privacy, mitigate the risks of rogue actions, and ensure the agents operate with the benefit of human oversight and transparency. When deployed according to secure-by-design principles, agents can give defenders an edge like no other tool that came before them. We will continue to share our agentic AI insights and report findings through our industry-standard disclosure process. You can keep tabs on all publicly disclosed vulnerabilities from Big Sleep on our issue tracker page.
[2]
Google expands AI-driven cybersecurity efforts with updates ahead of Black Hat and DEF CON - SiliconANGLE
Google LLC is gearing up its push into artificial intelligence-powered cybersecurity with a series of major announcements this week ahead of the Black Hat USA and DEF CON 33 conferences in early August. The search giant is spotlighting the growing role of agentic AI in detecting and preventing threats, including updates on its Big Sleep vulnerability discovery agent, a new AI-enabled version of its forensic tool Timesketch and its insider threat detection system FACADE.

Leading the list of announcements is news relating to Big Sleep, an autonomous AI agent launched last year to proactively hunt for unknown software vulnerabilities. Google has revealed that Big Sleep recently identified a critical SQLite flaw based on threat intelligence from the Google Threat Intelligence Group. The vulnerability discovered by Big Sleep had been known only to threat actors and was at risk of imminent exploitation. According to Google, this marks the first time an AI agent has directly prevented a live cyberattack, a milestone in AI-driven defense. Big Sleep is also being deployed to help improve the security of widely used open-source projects, a move Google describes as a "major win for ensuring faster, more effective security across the internet more broadly."

Ahead of the Black Hat conference, Google is also debuting new agentic capabilities for Timesketch, its open-source digital forensics platform. The upgraded version of Timesketch can now autonomously perform initial forensic investigations by analyzing logs and summarizing findings, cutting response times and allowing security analysts to focus on more complex tasks.

Google is also providing the first in-depth technical look at FACADE (Fast and Accurate Contextual Anomaly Detection Engine), an AI-based insider threat detection system used internally since 2018.
Interestingly, FACADE doesn't rely on historical attack data but instead leverages contrastive learning to flag suspicious activity in real time, processing billions of events daily across Google's infrastructure.

The company will also be on the ground at DEF CON 33, where it's partnering with Airbus SE to host a Capture the Flag competition. The event will feature challenges designed to showcase how AI assistants can collaborate with human participants to solve real-world security puzzles. Google will also support the final stage of the Defense Advanced Research Projects Agency-led AI Cyber Challenge, where competitors will demonstrate AI tools designed to secure open-source software.

Google says it's taking a responsible approach to the rise of agentic AI in security. The company plans to donate data from its Secure AI Framework to the Coalition for Secure AI to support collaborative workstreams on agent safety, cyber defense and software supply chain security. To support its commitment to responsible use, Google has also released a new white paper detailing its approach to building AI agents that are secure by design. The paper emphasizes principles like human oversight, transparency and risk mitigation, and explains how agentic systems can be deployed responsibly to maximize cybersecurity impact while safeguarding privacy and preventing unintended actions.

"This summer's advances in AI have the potential to be game-changing, but what we do next matters," said Kent Walker, president of Global Affairs at Google. "By building these tools the right way, applying them in new ways and working together with industry and governments to deploy them at scale, we can usher in a digital future that's not only more prosperous, but also more secure."
[3]
Agentic AI Turns Enterprise Cybersecurity Into Machine vs. Machine Battle | PYMNTS.com
Fraudsters have traditionally exploited that window, often with catastrophic results. Now, Google is showing that the arms race may no longer be moving at human speed, potentially signaling an end to the era of overloaded analysts chasing alerts and engineers patching software after the fact.

The tech giant unveiled several updates Tuesday (July 15) around agentic artificial intelligence-powered cybersecurity. Google is developing autonomous systems that can detect, decide and respond to threats in real time, often without human intervention.

"Our AI agent Big Sleep helped us detect and foil an imminent exploit," Sundar Pichai, CEO of Google and its parent company, Alphabet, posted on social platform X. "We believe this is a first for an AI agent -- definitely not the last -- giving cybersecurity defenders new tools to stop threats before they're widespread."

For business leaders, especially chief information security officers (CISOs) and chief financial officers (CFOs), this rising reality may pose new questions. Are enterprise organizations ready for defense at machine speed? What's the cost of not adopting these tools? Who's accountable when AI systems take action?

From Threat Reaction to Autonomous Prevention

Historically, zero-day vulnerabilities, unknown security flaws in software or hardware, are discovered by adversaries first, exploited quietly, and disclosed only after damage has occurred. Big Sleep reversed that pattern: no alerts, no tip-offs, just AI running autonomously and flagging a high-risk issue before anyone else even knew it existed.

For CISOs, this means a new category of tools is emerging: AI-first threat prevention platforms that don't wait for alerts but seek out weak points in code, configurations or behavior, and take defensive action automatically. For CFOs, it signals a change in cybersecurity economics.
Prevention at this scale is potentially cheaper and more scalable than the human-powered models of the past, but only if the AI is accurate and accountable.

"The models are only as good as the data being fed to them," Boost Payment Solutions Chief Technology Officer Rinku Sharma told PYMNTS in April. "Garbage in, garbage out holds true even with agentic AI."

The PYMNTS Intelligence report "The AI MonitorEdge Report: COOs Leverage GenAI to Reduce Data Security Losses" found that the share of chief operating officers (COOs) who said their companies had implemented AI-powered automated cybersecurity management systems leapt from 17% in May 2024 to 55% in August. The report found that these COOs adopted new AI-based systems because they could identify fraudulent activities, detect anomalies and provide real-time threat assessments.

Agentic AI and Risk Accountability at the Edge of the Front Line

With power comes responsibility, and in cybersecurity, that translates to risk ownership. Agentic AI systems, by definition, act independently. That autonomy introduces new challenges for governance and compliance. Who's responsible if an AI mistakenly flags a critical system and shuts it down? What happens if the AI fails to detect a breach?

"This isn't a technical upgrade; it's a governance revolution," Kathryn McCall, chief legal and compliance officer at Trustly, told PYMNTS in June. "You've got to treat these AI agents as non-human actors with unique identities in your system," she added. "You need audit logs, human-readable reasoning and forensic replay."

The emergence of agentic AI solutions for cybersecurity also has implications for enterprise composition. As workforces remain hybrid and attack surfaces widen, endpoint security is only as good as its weakest device. Bringing autonomous protection to the edge (phones, browsers, apps) may no longer be optional.
Stax Chief Technology Officer Mark Sundt told PYMNTS in June that if agentic AI is the engine, orchestration is the transmission. Without a central conductor, even the most capable agents act in isolation. "You've got agents to agents ... but who's driving the process?" Sundt said. "Who's doing the orchestration?"

In that light, cybersecurity investments must now answer a new question: How much decision-making power are we ready to give our machines? The adversaries aren't waiting, and the AI agents aren't slowing down.

For WEX Chief Digital Officer Karen Stroup, the best approach to deploying agentic AI involves a disciplined strategy of experimentation. "If you're going to experiment with agentic AI or any type of AI solutions, you want to focus on two things," she told PYMNTS in April. "One is the areas where you're most likely to have success. And two, is there going to be a good return on that investment?"
[4]
Google Adds Agentic AI Capabilities to Timesketch Cybersecurity Platform | PYMNTS.com
This is one of several updates around AI-powered cybersecurity features that Kent Walker, Google's president of Global Affairs, announced in a blog post.

Walker also said that Big Sleep, an AI agent developed by Google DeepMind and Google Project Zero and announced last year, found its first real-world security vulnerability in November and has discovered several more since then. Big Sleep was developed to search for and find unknown security vulnerabilities in software, and it is "exceeding our expectations and accelerating AI-powered vulnerability research," Walker said in the post.

"These AI advances don't just help secure Google's products," Walker said. "Big Sleep is also being deployed to help improve the security of widely used open-source projects -- a major win for ensuring faster, more effective security across the internet more broadly."

In addition, Walker announced in the post that Google will donate data from its Secure AI Framework (SAIF) to help accelerate the agentic AI, cyber defense and software supply chain security workstreams of the Coalition for Secure AI (CoSAI). Launched by Google and industry partners, CoSAI aims to ensure the safe implementation of AI systems, according to the post.

Walker also said that the final round of Google's two-year AI Cyber Challenge (AIxCC) will come to a close next month and that participants will unveil new AI tools to help find and fix vulnerabilities at that time.

"We have always believed in AI's potential to make the world safer, but over the last year, we have seen real leaps in its capabilities, with new tools redefining what lasting and durable cybersecurity can look like," Walker wrote in the post.

The PYMNTS Intelligence report "The AI MonitorEdge Report: COOs Leverage GenAI to Reduce Data Security Losses" found that the share of chief operating officers (COOs) who said their companies had implemented AI-powered automated cybersecurity management systems leapt from 17% in May 2024 to 55% in August.
The report found that these COOs adopted new AI-based systems because they could identify fraudulent activities, detect anomalies and provide real-time threat assessments.
Google unveils groundbreaking advancements in AI-driven cybersecurity, including Big Sleep's detection of real-world vulnerabilities and the integration of agentic AI capabilities into various security platforms.
In a significant leap forward for artificial intelligence in cybersecurity, Google has announced a series of groundbreaking developments centered around its AI agents. These advancements, revealed ahead of major cybersecurity conferences like Black Hat USA and DEF CON 33, showcase the growing potential of AI to revolutionize digital defense strategies [1].
At the forefront of Google's innovations is Big Sleep, an AI agent developed by Google DeepMind and Google Project Zero. Initially announced last year, Big Sleep has surpassed expectations by discovering its first real-world security vulnerability in November 2024. Since then, it has continued to uncover multiple vulnerabilities, demonstrating the immense potential of AI in proactive security measures [1].

In a landmark achievement, Big Sleep recently identified a critical SQLite vulnerability (CVE-2025-6965) based on intelligence from Google's Threat Intelligence Group. This vulnerability was previously known only to threat actors and was at risk of imminent exploitation. Google claims this marks the first instance of an AI agent directly preventing a live cyberattack, signaling a new era in AI-driven defense [2].
Google is not limiting its AI advancements to Big Sleep alone. The company has announced the integration of agentic AI capabilities into Timesketch, its open-source digital forensics platform. The upgraded version can now autonomously perform initial forensic investigations, analyzing logs and summarizing findings. This enhancement significantly reduces response times and allows security analysts to focus on more complex tasks [2].
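Google hasn't published the internals of Timesketch's new agent, but the "analyze logs, flag what's unusual, summarize for an analyst" loop it automates can be illustrated with a loose, hypothetical sketch. Everything below is invented for illustration (the log format, field names, and the rarity heuristic standing in for whatever analysis the real agent performs):

```python
from collections import Counter

# Hypothetical auth-log sample; a real investigation would pull events
# from a timeline store such as Timesketch rather than a list literal.
LOGS = [
    "2025-07-14T02:11:09 login user=svc-backup src=10.0.0.5 result=ok",
    "2025-07-14T02:41:12 login user=svc-backup src=10.0.0.5 result=ok",
    "2025-07-14T03:11:02 login user=svc-backup src=10.0.0.5 result=ok",
    "2025-07-14T03:12:44 login user=admin src=203.0.113.7 result=fail",
    "2025-07-14T08:02:51 login user=alice src=10.0.0.8 result=ok",
    "2025-07-14T09:17:33 login user=alice src=10.0.0.8 result=ok",
]

def parse(line):
    # Split "timestamp action k=v k=v ..." into a flat event dict.
    ts, action, *fields = line.split()
    return {"ts": ts, "action": action, **dict(f.split("=", 1) for f in fields)}

def triage(lines, rare_cutoff=1):
    """First-pass triage: flag (action, user, result) patterns seen at most
    rare_cutoff times. Rarity is a crude stand-in for the anomaly heuristics
    an investigative agent would apply before summarizing for an analyst."""
    events = [parse(l) for l in lines]
    counts = Counter((e["action"], e["user"], e["result"]) for e in events)
    flagged = [e for e in events
               if counts[(e["action"], e["user"], e["result"])] <= rare_cutoff]
    summary = f"{len(events)} events, {len(flagged)} flagged for analyst review"
    return flagged, summary

flagged, summary = triage(LOGS)
print(summary)  # 6 events, 1 flagged for analyst review
for e in flagged:
    print(e["ts"], e["user"], e["result"])  # the lone failed admin login
```

The point of the automation is the shape of the output, not the heuristic: the agent hands the analyst a short summary plus a handful of flagged events instead of the raw log stream.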
Additionally, Google has provided insights into FACADE (Fast and Accurate Contextual Anomaly Detection Engine), an AI-based insider threat detection system used internally since 2018. FACADE employs contrastive learning to flag suspicious activity in real time, processing billions of events daily across Google's infrastructure [2].
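FACADE's design details aren't in these reports, but the contrastive idea it relies on can be sketched: a contrastively trained encoder places an action near the contexts it normally co-occurs with, so an action whose embedding is far from every known-normal context looks suspicious without any historical attack data. The sketch below is a hypothetical illustration of that scoring step only (the embeddings are random stand-ins, not output of a real encoder):

```python
import math
import random

random.seed(0)

def normalize(v):
    # Scale a vector to unit length so dot products are cosine similarities.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def random_unit(dim=64):
    return normalize([random.gauss(0, 1) for _ in range(dim)])

# Hypothetical stand-ins for embeddings of one principal's routine action
# contexts; a real system would get these from a trained contrastive encoder.
normal_contexts = [random_unit() for _ in range(500)]

def anomaly_score(event, references):
    """Score = 1 - max cosine similarity to any reference context.

    A low maximum similarity means the event's embedding sits far from
    everything this principal normally does, i.e. it is out of character."""
    e = normalize(event)
    best = max(sum(a * b for a, b in zip(e, r)) for r in references)
    return 1.0 - best

# A routine event closely resembles a known context; a novel one resembles none.
routine = [x + random.gauss(0, 0.01) for x in normal_contexts[42]]
novel = random_unit()

print(anomaly_score(routine, normal_contexts))  # near 0
print(anomaly_score(novel, normal_contexts))    # markedly higher
```

Because scoring is just a nearest-neighbor lookup against normal behavior, this style of detector needs no labeled attacks, which matches the reported claim that FACADE works without historical attack data.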
Recognizing the power and potential risks of agentic AI in security, Google has released a white paper outlining its approach to building AI agents that are secure by design. The paper emphasizes principles such as human oversight, transparency, and risk mitigation [1].

Google is also fostering industry collaboration through various initiatives. The company plans to donate data from its Secure AI Framework to the Coalition for Secure AI, supporting collaborative workstreams on agent safety, cyber defense, and software supply chain security [4].
The rise of agentic AI in cybersecurity presents both opportunities and challenges for enterprises. While these AI-powered tools offer unprecedented capabilities in threat detection and prevention, they also raise questions about governance, compliance, and risk accountability [3].

As the cybersecurity landscape evolves into a machine-vs-machine battle, businesses must consider how to integrate these advanced AI systems into their existing security frameworks. The adoption of such technologies may require a shift in cybersecurity economics and a reevaluation of risk management strategies [3].