Curated by THEOUTPOST
On Tue, 25 Mar, 12:03 AM UTC
11 Sources
[1]
Microsoft announces security AI agents to help overwhelmed humans
Tom Warren is a senior editor and author of Notepad, who has been covering all things Microsoft, PC, and tech for over 20 years. Microsoft launched its AI-powered Security Copilot a year ago to bring a chatbot to the cybersecurity space, and now it's expanding it with AI agents that are designed to autonomously assist overwhelmed security teams. Microsoft is unveiling six of its own AI agents for its Security Copilot, as well as five that have been created by its partners. Microsoft's six security agents will be available in preview next month, and are designed to do things like triage and process phishing and data loss alerts, prioritize critical incidents, and monitor for vulnerabilities. "The six Microsoft Security Copilot agents enable teams to autonomously handle high-volume security and IT tasks while seamlessly integrating with Microsoft Security solutions," says Vasu Jakkal, corporate vice president of Microsoft Security. Microsoft is also working with OneTrust, Aviatrix, BlueVoyant, Tanium, and Fletch to enable some third-party security agents. These extensions will make it easier to analyze data breaches with OneTrust or perform root cause analysis of network outages and failures with Aviatrix. AI agents are becoming an increasingly popular way for companies like Microsoft to sell businesses on AI tools. Microsoft relaunched its Copilot for businesses earlier this year with free AI chat and access to pay-as-you-go AI agents. While these latest AI agents in the Security Copilot are designed for security teams to take advantage of, Microsoft is also improving its phishing protection in Microsoft Teams. Microsoft Defender for Office 365 will start protecting Teams users against phishing and other cyberthreats within Teams next month, including better protection against malicious URLs and attachments. Microsoft also has a lot of other industry-specific security announcements today and at its upcoming Microsoft Secure event on April 9th.
You can read more about them over at Microsoft's security blog.
[2]
Microsoft's new AI agents aim to help security pros combat the latest threats
Designed for Microsoft's Security Copilot tool, the AI-powered agents will automate basic tasks, freeing IT and security staff to tackle more complex issues. Microsoft is launching a series of AI agents for its Security Copilot program designed to help professionals more easily protect their organizations against today's security threats. Announced on Monday, the lineup includes six agents built by Microsoft, while five come from third-party partners. All will be available for preview starting in April. Integrated with the software giant's security products, the six Microsoft-created agents aim to help security teams handle high-volume security and IT tasks. Taking their cues from Microsoft's Zero Trust framework, these agents will also learn from user feedback and adapt to internal workflows. The six Microsoft-built agents are joined by five third-party agents, all of which will be available in Security Copilot. Officially launched about a year ago, Microsoft Security Copilot uses AI to monitor and analyze security threats that could impact your organization. Like any AI, the product tries to automate as much of the process as possible. The primary goal is to free up IT and security staffers from repetitive or time-consuming tasks. But this type of AI can also offer guidance to help staff determine how and where to focus their efforts, allowing them to respond to security threats more quickly and effectively. Security Copilot is offered on a pay-as-you-go model, allowing organizations to start small and increase their usage as needed. The tool's cost is billed monthly through a Security Compute Unit (SCU) at $4 per hour. Estimating one SCU for 24 hours daily for an entire month, Microsoft pegs the monthly cost at around $2,920.
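The pricing estimate above is straightforward to reproduce. This is a minimal sketch of the arithmetic: the $4/hour SCU rate is taken from the article, while using an average month of 365/12 days is an assumption that happens to match Microsoft's roughly $2,920 figure exactly.

```python
# Estimate the monthly cost of running Security Compute Units (SCUs)
# around the clock, based on the $4/hour figure quoted above.

HOURLY_RATE_USD = 4.00          # per SCU, per the article
HOURS_PER_DAY = 24
AVG_DAYS_PER_MONTH = 365 / 12   # assumption: annualized average month

def monthly_scu_cost(scu_count: int) -> float:
    """Cost of running `scu_count` SCUs 24/7 for an average month."""
    return scu_count * HOURLY_RATE_USD * HOURS_PER_DAY * AVG_DAYS_PER_MONTH

print(f"${monthly_scu_cost(1):,.2f}")  # one SCU, 24/7 -> $2,920.00
```

Since billing is pay-as-you-go, an organization running an SCU for only part of each day would scale the hours figure down accordingly.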
"Today's security professional has a perpetual onslaught of alerts and issues coming at them, often with limited context," Kris Bondi, CEO and co-founder of security company Mimoto, told ZDNET. "While AI agents aren't able to detect a threat, they should be able to help in responding to what has been found. An AI agent can be trained that when presented with specific cues to automatically execute a multi-step response. Removing some percentage of what security professionals must analyze would help what is currently an overwhelming list of tasks." However, today's AI technology is prone to error. A tool like Security Copilot can fail to catch legitimate security threats and trigger false positives. That's why human intervention is always needed. Plus, this security product remains relatively new, and many organizations are still trying to figure out how to adopt it. "AI agents promise improved threat response, but results from baseline models haven't been overwhelming, with many customers reporting that even high-tier solutions miss significant numbers of threats," J. Stephen Kowski, Field CTO at SlashNext Email Security+, told ZDNET. "Microsoft's Security Copilot shows promise, but adoption has been slower than expected due to lingering questions about data handling, required services, and licensing costs."
[3]
AI agents swarm Microsoft Security Copilot
Looking to sort through large volumes of security info? Redmond has your backend. Microsoft's Security Copilot is getting some degree of agency, allowing the underlying AI model to interact more broadly with the company's security software to automate various tasks. Security Copilot showed up in 2023 promising automatic triage of security incidents in Microsoft Defender XDR. At a press event on March 20 at Microsoft's San Francisco office, Vasu Jakkal, corporate vice president of security, compliance, identity, and management at Microsoft, revealed an expanded flight plan for Security Copilot, which is now assisted by 11 task-specific AI agents that interact with products like Defender, Purview, Entra, and Intune. "We are in the era of agentic AI," Jakkal said. "I'm sure everywhere you go, you hear agents and agents and agents. What are these agents? So, well, agents are all around us." Jakkal went on to note that in a conversation with a colleague, the question was posed: "What is an agent?" Her reply was: "That's a great question," and yet she went on without answering it. That established the pattern for the event - questions like "in what ways have agents failed when deployed?" and "what's the cost of running this in compute resources?" tended to go unanswered. But Jakkal did say that of the 11 Security Copilot agents introduced, five come from Microsoft Security partners. The Microsoft-made agents sit alongside those contributed by Microsoft Security partners, and the eleventh agent resides in Microsoft Purview Data Security Investigations (DSI), an AI-based service designed to help data security teams deal with data exposure risks. Essentially, these agents use the natural language capabilities of generative AI to automate the summarization of high-volume data like phishing warnings or threat alerts so that human decision makers can focus on signals deemed to be the most pressing.
This fits with Jakkal's thesis that the security landscape is changing faster than people can handle, making it necessary to rely on non-deterministic macros, or AI agents in more modern jargon. "You look at this web landscape, the speed, the scale, and the sophistication is increasing dramatically," she said. "From last year when we were seeing 4,000 attacks per second, we're seeing 7,000 attacks per second. That translates to 600 million attacks a day." Jakkal said the initial iteration of Security Copilot has already helped organizations deal with high-velocity threats. "For security teams using it, we've seen a 30 percent reduction in mean time to respond," she said, without elaborating on the cost of that improvement. "That means the time it takes them to respond to security incidents. We've seen early career talent, people who really wanted security but didn't know how to get started, being 26 percent faster, 35 percent more accurate. And even for seasoned professionals, we've seen them get 22 percent faster and 7 percent more accurate." Intrigued by the way in which AI agents might go wrong, The Register chatted with Tori Westerhoff, director in AI safety and security red teaming at Microsoft, about what her team had learned during the development of these agents. Westerhoff expressed confidence in Microsoft's overall approach to AI security, pointing to a blog post last year on the subject and noting that the AI models already come with guardrails and that her team has done a lot of work to limit cross-prompt injection. "We've been pushing this to product devs so that they're building with the awareness of how cross-prompt injection works," she said. 
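Jakkal's per-second and per-day figures are consistent with each other, as a quick back-of-the-envelope check shows. This sketch uses nothing beyond the numbers quoted above; the rounding to "600 million" is the article's.

```python
# Sanity-check the quoted attack-rate figures: 7,000 attacks per second
# versus "600 million attacks a day".

ATTACKS_PER_SECOND = 7_000
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

attacks_per_day = ATTACKS_PER_SECOND * SECONDS_PER_DAY
print(f"{attacks_per_day:,}")  # 604,800,000 - i.e. roughly 600 million a day
```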
Pressed to provide an example of false positive rates or related metrics for failures that emerged during the development of Security Copilot agents, Westerhoff said: "So I think in terms of specific product operations, I can't talk through those," but she allowed that Microsoft's red team does work with product developers prior to launch on AI hallucinations and hardening agentic systems. She went on to explain: "I think you're asking, 'Hey, what's the thing that's going to go wrong with this?' And I think the beauty of my team is that we work through those things and try to find any soft spots for any high-risk GenAI before launch, well, before it actually gets to customers." So, nothing to worry about. Nick Goodman, product architect for Security Copilot, showed off how the Phishing Triage Agent in Defender worked. "Everybody has phishing solutions," he explained. "Even despite phishing solutions, we train our employees to report phishing. And they do. Lots and lots of reports. Ninety-five percent of them are false positives. They each take about 30 minutes. And so our analysts spend most of their time triaging false positives. That's what this agent is going to help us with." At the same time, the customer still has to help the agent. Goodman showed how one company-wide email was flagged as a true positive - an actual phishing message - based on its characteristics, like language urging rapid action. Goodman said the message, despite its appearance and spammy language, was actually a legitimate HR communique. "The agent can't know that because it lacks my context," he said. "So it flags it for my review." Goodman went ahead and changed the classification of the message from suspected phishing to legitimate, and this instructed the agent how to do better next time. "This is learning," he said. "It's applied for this agent going forward, but only to me. It's not shared with Microsoft, not shared with other customers. 
There's no foundational model training happening. This is my context. This is literally all I have to do to start training the system, very much the same way you would train a human analyst." But without the salary, benefits, or desk occupancy. Asked how much Microsoft expects this system might save in labor costs, Goodman replied: "I don't have any studies we can share with you. Our standard for studies that we publish is pretty high." Goodman said that customers are using Security Copilot for this sort of phishing triage already. Asked whether Microsoft has a sense of the false positive rate out of the box compared to after training, Goodman said: "The input false positive rate is driven by human behavior [based on what people report]. The output rate, like the percentage triaged, I don't have numbers to share. We're in the evaluation period with customers right now." Ojas Rege, SVP and general manager of privacy and data governance at OneTrust, showed off how the company's Privacy Breach Response Agent might help corporate privacy officers deal with data breach reports. "If you have a data breach, in your blast radius analysis, you have a set of privacy obligations that you have to meet," he explained. "The challenge is that those breach notification regulations differ by every state, by every country, they're very complex and they're fragmented, and sometimes the notification numbers are really short, 72 hours." That's where the summarization capability of generative AI models comes into play. OneTrust's agent will construct a prioritized list of recommendations for the privacy or compliance officer to deal with, based on its analysis of data from OneTrust's regulatory research database. "The agent's not going to notify the regulatory authority," said Rege. "The agent's doing all the background work, but the human has to actually do the notification." 
Asked about the possibility of hallucination, Rege replied that the chances of hallucination are very narrow and that there's also an audit log that links to specific regulations, so the agent's recommendations can be confirmed. Microsoft's agents are here to help. You'll just need to check their work. ®
[4]
Microsoft deploys AI agents for cybersecurity
Zoom in: Starting next month, Microsoft will make six of its own new agents and five agents from partner companies available for preview in Security Copilot -- which is already integrated into all of Microsoft's security tools. Case in point: If an agent wrongly flags a training email as phishing, the security team can label it a false positive and instruct the agent not to flag messages from that vendor again. Between the lines: Microsoft says the new agents are a direct response to customer feedback. What they're saying: "There's just opportunity everywhere," Dorothy Li, corporate VP of Microsoft Security Copilot, told Axios. The intrigue: Microsoft also relied on an internal generative AI red team to pressure test the new agents for potential security risks.
[5]
"Another pair of eyes" - Microsoft launches all-new Security Copilot Agents to give security teams the upper hand
Microsoft is launching new Security Copilot Agents to help secure organizations with AI-first, end-to-end security platforms. The company says its new agents are designed to "autonomously assist with critical areas" like data security, identity management, and phishing. By working with some of the world's top software companies, Microsoft hopes to deliver "game-changing" protections and help customers "scale, augment, and increase the effectiveness of their privacy operations" to help organizations navigate the increasingly complex threat landscape and regulatory requirements. Microsoft's Global Head of Security, Vasu Jakkal, spoke to TechRadar Pro to discuss the way that AI is changing the cybersecurity landscape, and how the new initiatives will help defenders use AI to their advantage. Jakkal noted how AI is supercharging the volume of cyberattacks, and lowering the barriers for access to malicious campaigns, overwhelming security teams who often don't have access to first-rate tools and rely on manual processes and 'fragmented defenses'. "So you look at these three core problems, threat landscape, operational complexity, and data security, there's no way humans can scale to keep up with these challenges. In fact, we don't have the human talent in security right now," she warns. To help security teams try and navigate this, Microsoft is unveiling 11 new Copilot agents. Six of these agents will be available across the Microsoft end-to-end security platform, and are designed to assist with threat protection, data security, device management, identity and access, and threat intelligence. The new launches come alongside Microsoft's release of five new agentic solutions to help bolster security teams worldwide. These include a privacy breach response agent by OneTrust, a network supervisor agent by Aviatrix, a SecOps Tooling agent by BlueVoyant, an alert triage agent by Tanium, as well as a task optimizer agent by Fletch.
So that teams can keep up with the quickly evolving landscape, Security Copilot Agents will enable teams to handle high-volume security and IT tasks, and will work seamlessly alongside existing Microsoft security tools. Microsoft Threat Intelligence now processes 84 trillion signals per day, revealing the exponential growth in cyber-attacks, including 7,000 password attacks per second. Although you can't ever eliminate the risk of human error entirely, these new tools will look to be "another pair of eyes and pair of hands" to help double check things to reduce the risk factor, Jakkal explains. "Last year, in one year, we saw 30 billion phishing emails. That's a lot. And this volume, you just can't keep up, humans can't triage these. And so the phishing agent now can triage these emails and alerts, and it can tell you, hey, this is a false alarm and this is a true alert, so it kind of reduces that volume." Jakkal, like many others, describes cybersecurity as a cat and mouse game between cybercriminals and security teams. Right now, AI is the attacker's tool of choice and allows for a monumental number of intrusions, but the more attacks are leveraged, the more defenders can learn. "Microsoft processes 84 trillion signals every single day. That signal intelligence, it's hard for humans to just work through that and scan through, but guess what tool works really great with data? AI." For security teams to gain the upper hand, defenders must embrace AI, Jakkal argues, as the talent gap and skills shortage is holding the industry back, and cybersecurity teams "just don't have enough defenders in the world," so must look to AI to keep up with demand. The barrage of attacks isn't likely to change anytime soon, either. Cyberattacks continue to be a profitable endeavour, and cybercrime is even helping fund rogue nations across the world, and with rising geopolitical tensions, cybersecurity teams must be more alert than ever before.
"Attacks are happening all around and because ransomware is a very lucrative industry and in fact global cybercrime costs us 9.2 trillion dollars, US dollars a year," Jakkal concludes. "So as long as there's money to be made in it, we are going to see attacks and it can be even worse for a small and medium business because they don't have the staff to even tackle these problems."
[6]
Microsoft introduces AI agents for Security Copilot - SiliconANGLE
Microsoft Corp. is enhancing its Security Copilot service with a set of artificial intelligence agents that will automate repetitive tasks for users. The company detailed the agents today alongside upgrades to other parts of its cybersecurity portfolio. Launched last April, Security Copilot is a specialized version of Microsoft's Copilot AI assistant. It enables cybersecurity professionals to surface data about breaches using natural language prompts. Security Copilot also automates several related tasks, such as the process of setting up access controls for employee devices. Microsoft is extending Security Copilot's capabilities with six internally developed AI agents. They will roll out alongside five partner-built agents from Aviatrix Systems Inc., OneTrust LLC, Tanium Inc., Fletch and BlueVoyant LLC. Three of the new Microsoft-developed tools focus on helping cybersecurity professionals sift through alerts. One, the Phishing Triage Agent, can review phishing alerts from a company's cybersecurity systems and filter false positives. It's joined by two agents designed to analyze notifications from Purview. This is a Microsoft application that detects when employees use business data in an unauthorized manner. The fourth new addition to Security Copilot is the Conditional Access Optimization Agent. It's designed to work with Microsoft Entra, a tool that administrators use to regulate which employees can access which applications. The new agent points out insecure user access rules and generates a fix that administrators can implement with one click. A fifth agent, the Vulnerability Remediation Agent, integrates with Microsoft's Intune tool for managing Windows devices. The company says that administrators can now more quickly find vulnerable endpoints and apply operating system patches. Rounding out the agent lineup is the Threat Intelligence Briefing Agent.
According to Microsoft, it enables Security Copilot to generate a report about cybersecurity threats that could pose a risk to an organization's systems. The tech giant's threat intelligence unit collects 84 trillion data points per day about online risks such as ransomware campaigns. The partner-developed agents that will roll out for Security Copilot address a number of use cases that aren't covered out of the box. Aviatrix's agent, for instance, promises to help customers troubleshoot network issues. OneTrust's tool makes it easier to comply with privacy regulations. "With security teams fully in control, agents accelerate responses, prioritize risks, and drive efficiency to enable proactive protection and strengthen an organization's security posture," Vasu Jakkal, the corporate vice president of Microsoft Security, wrote in a blog post today. The company is rolling out the agents alongside other enhancements to its cybersecurity portfolio. Edge for Business, an enterprise-focused version of Microsoft's browser, can now block workers from entering sensitive data into chatbots. The company will provide similar controls for desktop-based chatbot clients through integrations between Purview and third-party SASE, or secure access service edge, products.
[7]
Microsoft looks to AI to close window on hackers
'We are facing unprecedented complexity when it comes to the threat landscape,' says Microsoft's Vasu Jakkal. A surge in hacking attempts by criminals, fraudsters and spy agencies has reached a level of "unprecedented complexity" that only artificial intelligence will be able to combat, according to Microsoft. "Last year we tracked 30 billion phishing emails," says Vasu Jakkal, vice president of security at the US-based tech giant. "There's no way any human can keep up with the volume." In response, the company is launching 11 AI cybersecurity "agents" tasked with identifying and sifting through suspicious emails, blocking hacking attempts and gathering intelligence on where attacks may originate. With around 70% of the world's computers running Windows software and many businesses relying on their cloud computing infrastructure, Microsoft has long been the prime target for hackers. Unlike an AI assistant that might answer a user's query or book a hair appointment, an AI agent is a computer programme that autonomously interacts with its environment to carry out tasks without direct input from a user. In recent years, there's been a boom in marketplaces on the dark web offering ready-made malware programmes for carrying out phishing attacks, as well as the potential for AI to write new malware code and automate attacks, which has led to what Ms Jakkal describes as a "gig economy" for cybercriminals worth $9.2trn (£7.1trn). She says they have seen a five-fold increase in nation-state and organised crime groups they are tracking in cyberspace. "We are facing unprecedented complexity when it comes to the threat landscape," says Ms Jakkal. The AI agents, some created by Microsoft, and others made by external partners, will be incorporated into Microsoft's portfolio of AI tools called Copilot and will primarily serve their customers' IT and cybersecurity teams rather than individual Windows users. 
Because an AI can spot patterns in data and screen inboxes for dodgy-looking emails far faster than a human IT manager, specialist cybersecurity firms and now Microsoft have been launching "agentic" AI models to keep increasingly vulnerable users safe online. But others in the field are deeply concerned about unleashing autonomous AI agents across a user's computer or network. In an interview with Sky News last month, Meredith Whittaker, CEO of messaging app Signal, said: "Whether you call it an agent, whether you call it a bot, whether you call it something else, it can only know what's in the data it has access to, which means there is a hunger for your private data and there's a real temptation to do privacy invading forms of AI." Microsoft says its release of multiple cybersecurity agents ensures each AI has a very defined role, only allowing it access to data that's relevant to its task. It also applies what it calls a "zero trust framework" to its AI tools, which requires the company to constantly assess whether agents are playing by the rules they were programmed with. A roll-out of new AI cybersecurity software by a company as dominant as Microsoft will be closely watched. Last July, a tiny error in the code of an application made by cybersecurity firm CrowdStrike instantly crashed around 8.5 million computers worldwide running Microsoft Windows, leaving users unable to restart their machines. The incident - described as the largest outage in the history of computing - affected airports, hospitals, rail networks and thousands of businesses including Sky News - some of which took days to recover.
[8]
Microsoft just unleashed 11 AI agents at once
Microsoft is rolling out a suite of AI-powered agents for its Security Copilot program, aiming to streamline security tasks for professionals. Announced on Monday and set for a preview release in April, the launch includes six Microsoft-built agents and five from third-party partners. Integrated with Microsoft's security products, the six in-house agents are designed to help security teams manage high-volume tasks. These agents will learn from user feedback and align with Microsoft's Zero Trust framework. The six Microsoft-built agents are joined by five third-party agents, all available in Security Copilot. Launched a year ago, Microsoft Security Copilot uses AI to monitor and analyze security threats, automating tasks to free up IT staff. It also offers guidance to help staff focus their efforts, improving response times and effectiveness. Security Copilot operates on a pay-as-you-go model, billed monthly through a Security Compute Unit (SCU) at $4 per hour. Microsoft estimates a monthly cost of around $2,920 for one SCU used 24/7. Kris Bondi, CEO of Mimoto, noted that while AI agents can't detect threats, they can execute multi-step responses based on specific cues. J. Stephen Kowski, Field CTO at SlashNext Email Security+, added that despite the promise of improved threat response, baseline models have had mixed results, and adoption of Microsoft's Security Copilot has been slower than anticipated due to questions about data handling and costs.
[9]
Microsoft's AI Agents Aim to Make Cybersecurity Teams' Work Easier
If you peek behind the curtain at a network defender's workflow, you might see hundreds -- if not thousands -- of emails marked as potential spam or phishing. It can take hours to sift through the messages to detect the most urgent threats. When a data breach occurs, figuring out what vital information was stolen is a critical -- but often challenging -- step for investigators. Today, Microsoft announced a set of artificial intelligence agents aimed at making cybersecurity teams' work a little easier. That could be good news for the many businesses large and small that use Microsoft 365 for their email, cloud storage, and other services. Agentic AI is a buzzy new term for AI systems that can take actions on behalf of a human user. One step up from generative AI chatbots, AI agents promise to do actual work, such as executing code or performing web searches. OpenAI recently launched Deep Research mode for ChatGPT, which can conduct multi-step web searches to research complex topics or make shopping recommendations for major purchases. Google has been rolling out its own AI agents built off the latest version of Gemini. A year ago, Microsoft launched Security Copilot, which introduced AI tools to its suite of security products: Purview, Defender, Sentinel, Intune, and Entra. Starting in April, users can opt in to having AI agents do specific tasks for them.
[10]
Microsoft Debuts Security Copilot Agents: Five Big Things To Know
The tech giant is announcing six agentic offerings for its Microsoft Security Copilot platform, which is 'really taking automation to that next step' for security teams, Microsoft's Dorothy Li tells CRN. Microsoft announced a first set of AI agents for its Security Copilot platform Monday, beginning the next phase of the tech giant's effort to bring greater automation to overburdened security teams. Speaking with CRN, Dorothy Li, corporate vice president for Microsoft Security Copilot, said the launch of the six agentic offerings is aimed at "really taking automation to that next step" for security teams. Microsoft said that its new Security Copilot agents will be made available as a preview April 27. The move comes a year after Microsoft released its Security Copilot platform into general availability -- and at a time when interest in AI agents continues to surge as a potential next frontier for LLM technology. What follows are five big things to know about Microsoft's Security Copilot agents. Speaking to journalists last week in New York, Microsoft's Vasu Jakkal said that while the initial set of GenAI-powered capabilities for security teams has made a difference, it still doesn't go far enough in terms of automation. "Without the agent capability and the autonomous work that agents can do on behalf of humans, with human agency we cannot keep up with this tremendous volume of alerts and triage them," said Jakkal, corporate vice president for security, compliance, identity, management and privacy at Microsoft. The agentic expansion for Security Copilot will have an impact across Microsoft's full security portfolio -- consisting of threat protection (Defender and Sentinel), data governance and compliance (Purview), identity and access management (Entra) and device management (Intune).
"We are integrating these Security Copilot agents into each of our products," Jakkal said. Ultimately, Microsoft's Security Copilot agents are a "natural evolution of a question-and-answer AI assistant -- in that it adds this intelligent, autonomous automation to security," Li told CRN. With millions of cybersecurity professional roles believed to be unfilled, "I've never met a customer who says, 'I'm right-staffed for my [Security Operations Center],'" she said. "Everyone's short-staffed." The potential advantage of agentic security capabilities, however, is to "automate the repetitive, high-volume tasks," Li said. This can be as fundamental as helping to improve an organization's security hygiene and reduce the attack surface, she said -- to more advanced uses that enable security teams to "respond faster" when attacks do happen. All in all, Security Copilot agents can "really automate a lot of the repetitive tasks so the humans can focus on the strategic, truly critical work," Li said. The first of the agentic capabilities coming to Microsoft Defender is the Phishing Triage Agent, the company said. The agent will be available in the Microsoft Defender portal and will allow more automated and effective triaging of the massive number of phishing-related alerts that organizations are constantly dealing with, Jakkal said. Specifically, the Phishing Triage Agent will help security teams to address potential phishing attempts that have been submitted by users -- including with making a determination about whether the submission represents a genuine phishing attack or not, according to Microsoft. For Purview, Microsoft is unveiling Alert Triage Agents for both its Data Loss Prevention and Insider Risk Management tools. The Purview agents will "identify the alerts that pose the greatest risk to your organization and should be prioritized first" by analyzing content as well as the likely intent that triggered the alert, Li wrote in a blog post.
Alerts will be categorized by the agents in part "based on the impact they have on sensitive data," she wrote. Meanwhile, the agents will also provide a "comprehensive explanation" to explain the categorization decisions, according to Li. Microsoft is unveiling additional agents in preview for Entra and Intune. The new Conditional Access Optimization Agent for Entra will automate the "detection and resolution of policy drift," Li wrote in the post, through continuous monitoring and analysis. The new Vulnerability Remediation Agent for Microsoft Intune, meanwhile, will automatically identify and evaluate Windows vulnerabilities while also providing prioritization for responses, according to Li. Additionally, Microsoft announced it is launching an agentic capability in Security Copilot that can automatically generate a curated threat intelligence report for security teams. The Threat Intelligence Briefing Agent uses information from Defender Threat Intelligence and Defender External Surface Management to "deliver prioritized reports in just 4-5 minutes," Li wrote in the post. Along with the six Microsoft agents for Security Copilot, the company also disclosed details about third-party agents being announced for the platform Monday. The five Security Copilot agents from third-party vendors debuting initially are the Privacy Breach Response agent from OneTrust; the Network Supervisor agent from Aviatrix; the SecOps Tooling Agent from BlueVoyant; the Alert Triage Agent from Tanium; and the Task Optimizer Agent from Fletch.
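CRN's description of the Purview Alert Triage Agents, which weigh data sensitivity against the likely intent behind an alert and explain each categorization, can be illustrated with a toy scoring sketch. Everything below (field names, the scoring formula, the weights) is a hypothetical assumption for illustration only, not Microsoft's implementation:

```python
# Illustrative only: a toy model of risk-based alert triage.
# All fields and the scoring formula are hypothetical assumptions.

from dataclasses import dataclass, field

@dataclass
class Alert:
    id: str
    sensitivity: int      # 0-3: how sensitive the data involved is
    intent_score: float   # 0.0-1.0: estimated likelihood the action was intentional
    explanation: str = field(default="")

def triage(alerts):
    """Rank alerts by a combined risk score and attach a short rationale."""
    for a in alerts:
        # Combine impact on sensitive data with inferred intent.
        a.risk = a.sensitivity * a.intent_score
        a.explanation = (
            f"sensitivity={a.sensitivity}, intent={a.intent_score:.2f}, "
            f"risk={a.risk:.2f}"
        )
    # Highest-risk alerts surface first for the analyst.
    return sorted(alerts, key=lambda a: a.risk, reverse=True)

alerts = [
    Alert("A1", sensitivity=1, intent_score=0.9),
    Alert("A2", sensitivity=3, intent_score=0.8),
    Alert("A3", sensitivity=2, intent_score=0.1),
]
ranked = triage(alerts)
print([a.id for a in ranked])  # highest-risk alert first
```

A real agent would derive these signals from content analysis and behavioral context rather than hand-set numbers; the sketch only shows the shape of the prioritize-and-explain workflow the article describes.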
[11]
Satya Nadella Says 'Speed, Scale And Frequency' Of Cyberattacks Outpacing Capabilities Of Human Defenders As Microsoft Unveils AI Security Agents - Microsoft (NASDAQ:MSFT)
On Monday, Microsoft Corporation (NASDAQ: MSFT) announced an expansion of its Security Copilot platform, introducing six new AI-powered agents aimed at helping organizations detect, prioritize, and respond to cyber threats more efficiently.

What Happened: "The speed, scale, and frequency of cyberattacks are outpacing the capabilities of human defenders alone," said Microsoft CEO Satya Nadella on X. "Today we're expanding Security Copilot with security agents to help address routine security and IT tasks," he added.

These agents will be available in preview next month and are built to handle phishing alerts, data loss incidents, vulnerability monitoring, and more, without requiring constant human intervention. Microsoft is also partnering with companies including OneTrust, Aviatrix, BlueVoyant, Tanium, and Fletch to add five more third-party AI security agents.

"The six Microsoft Security Copilot agents enable teams to autonomously handle high-volume security and IT tasks while seamlessly integrating with Microsoft Security solutions," stated Vasu Jakkal, corporate vice president of Microsoft Security, in a blog post. Microsoft is expected to share more updates at its Secure event on April 9.

Why It's Important: According to an analysis by Goldman Sachs analyst Kash Rangan, Microsoft's focus on AI is seen as a compelling investment opportunity; he reiterated a Buy rating and a $500 price target for the stock. In January, Microsoft posted quarterly GAAP earnings of $3.23 per share, surpassing the consensus estimate of $3.11. The company's revenue for the quarter reached $69.6 billion, exceeding analysts' expectations of $68.78 billion. Last year, Microsoft president Brad Smith called on the U.S. government to take a stronger approach to combating cyber threats from nations such as Russia, China, and Iran.

Price Action: Microsoft's stock dipped 0.0067% in after-hours trading, settling at $393.05. During Monday's regular session, it closed at $393.08, marking a 0.47% gain for the day, according to Benzinga Pro data.
Microsoft introduces AI-powered security agents to assist overwhelmed cybersecurity teams, aiming to automate high-volume tasks and improve threat response times.
Microsoft has announced the launch of new AI-powered security agents for its Security Copilot program, designed to assist overwhelmed cybersecurity teams in combating the ever-increasing volume of digital threats. The company is introducing six of its own AI agents and five from partner companies, all set to be available for preview next month 1.
The six Microsoft-created agents are designed to handle high-volume security and IT tasks, integrating seamlessly with Microsoft's security solutions. These agents will triage and process phishing and data loss alerts, prioritize critical incidents, and monitor for vulnerabilities.
Microsoft is collaborating with partners to expand the capabilities of Security Copilot: OneTrust, Aviatrix, BlueVoyant, Tanium, and Fletch are contributing five third-party agents, enabling tasks such as analyzing data breaches with OneTrust and performing root cause analysis of network outages and failures with Aviatrix.
The introduction of these AI agents comes at a critical time in cybersecurity, with the speed, scale, and frequency of cyberattacks outpacing the capabilities of human defenders alone.
Early results from Security Copilot implementation show promising improvements, with Microsoft saying the agents can automate repetitive, high-volume tasks so analysts can focus on strategic, critical work.
Vasu Jakkal, Corporate Vice President of Microsoft Security, emphasized the role of AI in addressing the cybersecurity talent shortage:
"There's just no way humans can scale to keep up with these challenges. In fact, we don't have the human talent in security right now," Jakkal stated 5.
While the AI agents promise improved threat response, some challenges remain.
As cyber threats continue to evolve and increase in sophistication, the role of AI in cybersecurity is expected to grow. Microsoft's initiative represents a significant step towards creating more robust, AI-enhanced security solutions to combat the rising tide of cyber attacks and protect organizations in an increasingly complex digital landscape.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved