10 Sources
[1]
Microsoft's AI Prototype Can Reverse Engineer Malware, No Human Needed
Microsoft says it's developed a prototype AI program that can reverse engineer malware, automating a task usually reserved for expert human security researchers. The prototype, dubbed Project Ire, was designed to tackle one of the toughest assignments in security research: "Fully reverse engineering a software file without any clues about its origin or purpose," the company said in a Tuesday blog post. In one Microsoft test, Project Ire was able to correctly identify 90% of malicious Windows driver files. In addition, the AI program flagged only 2% of benign files as dangerous. "This low false-positive rate suggests clear potential for deployment in security operations, alongside expert reverse engineering reviews," the company says. Project Ire stands out from traditional antivirus engines, which often work by scanning files and programs for strings of computer code, known patterns, or certain behaviors tied to past malware detections. The problem is that hackers are constantly evolving their techniques to conceal malicious functions, making new attacks harder to catch. This might include using built-in functions in legitimate software to download malicious modules at a later time. The IT security industry has long tapped AI, such as machine learning, to improve malware detection. With Project Ire, however, Microsoft joins other companies in leveraging large language models to investigate and flag potential security threats. "Project Ire attempts to address these challenges by acting as an autonomous system that uses specialized tools to reverse engineer software. The system's architecture allows for reasoning at multiple levels, from low-level binary analysis to control flow reconstruction and high-level interpretation of code behavior," Redmond added. In its blog post, Microsoft said the AI program was able to detect a Windows-based rootkit and another malware sample designed to deactivate antivirus software by identifying their key features.
Project Ire was also smart enough to "author a conviction case, a detection strong enough to justify automatic blocking," which led Microsoft to flag and block a malware sample tied to an elite hacking group. While the rise of AI has sparked concerns about machines replacing people, Microsoft is positioning Project Ire as a tool to assist overburdened security researchers and IT staff. The company plans on deploying the AI within the team that develops Microsoft Defender as a "Binary Analyzer for threat detection and software classification." "Our goal is to scale the system's speed and accuracy so that it can correctly classify files from any source, even on first encounter," the company added. Still, the AI program remains a prototype and still faces limitations. In another Microsoft test involving nearly 4,000 files slated for manual review, the company found Project Ire "achieved a high precision score of 0.89," meaning nearly 9 out of 10 files that were flagged as malicious were correctly identified. However, Project Ire appeared to only detect "roughly a quarter of all actual malware" within the scanned files. Still, Microsoft noted: "While overall performance was moderate, this combination of accuracy and a low error rate suggests real potential for future deployment."
[2]
Microsoft's AI agent only caught 26% of malware in a test
Project Ire promises to use LLMs to detect whether code is malicious or benign Microsoft has rolled out an autonomous AI agent that it claims can detect malware without human assistance. The prototype, called Project Ire, reverse engineers software "without any clues about its origin or purpose," and then determines if the code is malicious or benign, using large language models (LLMs) and a bunch of callable reverse engineering and binary analysis tools. "It was the first reverse engineer at Microsoft, human or machine, to author a conviction case -- a detection strong enough to justify automatic blocking -- for a specific advanced persistent threat (APT) malware sample, which has since been identified and blocked by Microsoft Defender," Redmond claimed in a Tuesday blog post. If it performs as promised, and at scale, Project Ire will help relieve security analysts of the tedious work of manually analyzing every sample and classifying it as either good or bad. This can take hours, leading to alert fatigue and burnout, and it also means that there are fewer human eyes and brains focused on the really sophisticated and fast-moving threats that require immediate detection and blocking. But that's still a big if at this point. In a real-world test of about 4,000 "hard-target" files, meaning that they weren't classified by automated systems and would otherwise be manually reviewed by human reverse engineers, nearly 9 out of 10 files (89 percent) that Project Ire flagged as malicious were actually malicious. However, the AI agent only detected about a quarter (26 percent) of all the malware in this test. "While overall performance was moderate, this combination of accuracy and a low error rate suggests real potential for future deployment," the Microsoft security engineers wrote. 
The prototype will be integrated into Microsoft's Defender suite of security tools, which encompasses antivirus, endpoint, email, and cloud security, as a binary analyzer for threat detection and software classification. "Our goal is to scale the system's speed and accuracy so that it can correctly classify files from any source, even on first encounter," according to Microsoft. "Ultimately, our vision is to detect novel malware directly in memory, at scale." AI-based malware analysis is not new, with antivirus vendors like Cylance using machine learning to analyze files for nearly a decade. However, "what we learned then and that can be applied now is that the best results for malware detection involve a combination of deterministic (like patterns and signatures), machine learning and probabilistic techniques (AI/GenAI) approaches," Gartner VP Neil MacDonald told The Register via email in response to questions about Project Ire. "That's why in this case, Microsoft highlighted its use in the SOC as far as an incident detection and response process rather than inline as a preventative control," he said. MacDonald did note the "relatively high percentage of false positives and false negatives documented in the paper show the limitations of this approach." Still, that's not to say that security companies shouldn't invest in AI, he added. "It is clear that, moving forward in a world where hackers will leverage AI for quickly creating new and novel attacks, this type of AI/GenAI-based approach will be critical to keeping up with the volume and variations of new threats," MacDonald said. "AI, in the hands of the defenders, will be necessary to offset the threat of AI in the hands of the attackers." 
Indeed, Microsoft's announcement comes as all of the big security companies double down on AI, especially AI agents -- both integrating them into their enterprise tools and also helping companies protect their data and people against the myriad threats that AI systems and agents introduce. While Redmond is arguably furthest along in this process of stuffing AI and task-specific agents into all of its security products, Google is also developing its own army of AI agents including one that analyzes malware and determines the extent of the threat it poses. The Chocolate Factory announced this malware analysis agent at its annual Cloud Next event, and at the time, said it would be available in preview for select Google customers this year. Late last month, Palo Alto Networks inked a $25-billion deal to buy Israeli biz CyberArk and bring the smaller firm's identity security tech, which not only verifies human identities but also machines and AIs, into its larger security platform. Machine identities outnumber those of humans by 40 to one, according to CyberArk, and this number is expected to skyrocket as more companies use AI agents. ®
[3]
Microsoft Launches Project Ire to Autonomously Classify Malware Using AI Tools
Microsoft on Tuesday announced an autonomous artificial intelligence (AI) agent that can analyze and classify software without assistance in an effort to advance malware detection efforts. The large language model (LLM)-powered autonomous malware classification system, currently a prototype, has been codenamed Project Ire by the tech giant. The system "automates what is considered the gold standard in malware classification: fully reverse engineering a software file without any clues about its origin or purpose," Microsoft said. "It uses decompilers and other tools, reviews their output, and determines whether the software is malicious or benign." Project Ire, per the Windows maker, is an effort to enable malware classification at scale, accelerate threat response, and reduce the manual efforts that analysts have to undertake in order to examine samples and determine if they are malicious or benign. Specifically, it uses specialized tools to reverse engineer software, conducting analysis at various levels, ranging from low-level binary analysis to control flow reconstruction and high-level interpretation of code behavior. "Its tool-use API enables the system to update its understanding of a file using a wide range of reverse engineering tools, including Microsoft memory analysis sandboxes based on Project Freta, custom and open-source tools, documentation search, and multiple decompilers," Microsoft said. Project Freta is a Microsoft Research initiative that enables "discovery sweeps for undetected malware," such as rootkits and advanced malware, in memory snapshots of live Linux systems during memory audits. 
The evaluation is a multi-step process:

* Automated reverse engineering tools identify the file type, its structure, and potential areas of interest
* The system reconstructs the software's control flow graph using frameworks like angr and Ghidra
* The LLM invokes specialized tools through an API to identify and summarize key functions
* The system calls a validator tool to verify its findings against evidence used to reach the verdict and classify the artifact

The summarization leaves a detailed "chain of evidence" log that details how the system arrived at its conclusion, allowing security teams to review and refine the process in case of a misclassification. In tests conducted by the Project Ire team on a dataset of publicly accessible Windows drivers, the classifier has been found to correctly flag 90% of all files and incorrectly identify only 2% of benign files as threats. A second evaluation of nearly 4,000 "hard-target" files rightly classified nearly 9 out of 10 malicious files as malicious, with a false positive rate of only 4%. "Based on these early successes, the Project Ire prototype will be leveraged inside Microsoft's Defender organization as Binary Analyzer for threat detection and software classification," Microsoft said. "Our goal is to scale the system's speed and accuracy so that it can correctly classify files from any source, even on first encounter. Ultimately, our vision is to detect novel malware directly in memory, at scale." The development comes as Microsoft said it awarded a record $17 million in bounty awards to 344 security researchers from 59 countries through its vulnerability reporting program in 2024. A total of 1,469 eligible vulnerability reports were submitted between July 2024 and June 2025, with the highest individual bounty reaching $200,000. Last year, the company paid $16.6 million in bounty awards to 343 security researchers from 55 countries.
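The staged process described above can be sketched as a simple orchestration loop. This is a minimal illustration only: every tool name and data structure below is a hypothetical stand-in, not Microsoft's actual Project Ire API, and the "analysis" is reduced to toy string checks.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical stand-ins for the pipeline stages described above:
# triage, control-flow reconstruction, per-function summarization,
# and a validator that checks the verdict against the evidence log.

@dataclass
class Evidence:
    function: str
    finding: str

@dataclass
class Report:
    file_type: str = "unknown"
    evidence: list = field(default_factory=list)
    verdict: str = "undetermined"

def identify_file_type(blob: bytes) -> str:
    # Step 1: triage. Here, just a toy check for a PE ("MZ") header.
    return "pe" if blob.startswith(b"MZ") else "unknown"

def reconstruct_functions(blob: bytes) -> list:
    # Step 2: a real system would rebuild the control flow graph with
    # angr or Ghidra; here we fake a list of recovered function names.
    return ["entry", "kill_av_process"] if b"kill" in blob else ["entry"]

def summarize_function(name: str) -> Optional[Evidence]:
    # Step 3: the LLM would summarize each key function via a tool API.
    if "kill" in name:
        return Evidence(name, "terminates security processes")
    return None

def validate(report: Report) -> bool:
    # Step 4: a malicious verdict must be backed by logged evidence.
    return report.verdict != "malicious" or len(report.evidence) > 0

def classify(blob: bytes) -> Report:
    report = Report(file_type=identify_file_type(blob))
    for fn in reconstruct_functions(blob):
        ev = summarize_function(fn)
        if ev:
            report.evidence.append(ev)  # the "chain of evidence" log
    report.verdict = "malicious" if report.evidence else "benign"
    assert validate(report), "verdict not supported by evidence"
    return report
```

Calling `classify()` on a blob containing the toy marker yields a `malicious` verdict with an evidence entry a reviewer could audit, mirroring the chain-of-evidence idea, if not the sophistication, of the real system.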
[4]
Microsoft's new AI reverse-engineers malware autonomously, marking a shift in cybersecurity
Microsoft says it has created an advanced AI system that can reverse-engineer and identify malicious software on its own, without human assistance. The prototype system, called Project Ire, automatically dissects software files to understand how they work, what they do, and whether they're dangerous. This kind of deep analysis is typically performed by human security experts. Long-term, Microsoft says it hopes the AI will detect new types of malware directly in computer memory, helping to stop threats faster and on a larger scale. The system "automates what is considered the gold standard in malware classification: fully reverse engineering a software file without any clues about its origin or purpose," the company said in a blog post announcing the prototype. This differs from existing security tools that scan for known threats or match files to existing patterns. It comes as security defenders and hackers engage in an arms race to use emerging AI models and autonomous agents to their advantage. More broadly, Microsoft has described security as its top priority, and put security deputies into all of its product teams through its Secure Future Initiative, following a series of high-profile vulnerabilities in its software that have led to questions and frustration from business and government leaders. On Monday, the company launched its latest Zero Day Quest, a global bug bounty competition with up to $5 million in total rewards. The challenge invites security researchers to find vulnerabilities in Microsoft cloud and AI products, with the potential for bonus payouts in high-impact scenarios. Announcing Project Ire on Tuesday morning, Microsoft said it was accurate enough in one case to justify automatically blocking a highly advanced form of malware. It was the first time that any system at the company -- human or machine -- had produced a threat report strong enough to trigger an automatic block on its own. 
Microsoft says this kind of automation could eventually help protect billions of devices more effectively, as cyberattacks become more sophisticated. Early testing showed the AI to be very accurate: when it determined a file was malicious, it was correct 98% of the time, and it incorrectly flagged safe files as threats in 2% of cases, according to the company. It's part of a growing wave of AI systems aimed at defusing cybersecurity threats in new ways. Google's "Big Sleep" AI, for example, also operates autonomously but concentrates instead on discovering security vulnerabilities in code. Project Ire was developed by teams across Microsoft Research, Microsoft Defender, and Microsoft Discovery & Quantum, and will now be used internally to help speed up threat detection across Microsoft's security tools, according to the company.
[5]
Microsoft unveils AI agent that can autonomously detect malware
Why it matters: The tool is a breakthrough for cyber defenders, who spend hours studying and assessing suspicious files on their networks.

Zoom in: Microsoft's new Project Ire can analyze and classify software "without assistance," according to a blog post published Tuesday.

* That analysis and classification is the "gold standard" for malware detection, the blog adds.

Context: Typical malware detection relies on a skilled analyst who can take a potentially tainted software file and pick it apart until they uncover its origins.

* This can take hours and be taxing for analysts, who might have to dig through hundreds of files to see if they're malicious.
* But automating this task is incredibly difficult: AI struggles to make nuanced judgment calls about a program's intent or maliciousness, especially when its behavior is ambiguous or dual use.

Between the lines: Project Ire is combating those limitations in a couple of ways.

* First, the agent runs on a system that breaks malware analysis into different layers, meaning the tool reasons in stages rather than risking overload by trying to do everything at once.
* Second, the tool draws on a wide range of tools, including Microsoft memory analysis sandboxes, custom and open-source tools, documentation search, and multiple decompilers.

The intrigue: During a real-world test of Project Ire on nearly 4,000 files flagged by Microsoft Defender, nearly 9 out of 10 files that the agent flagged as malicious were actually malicious.

Yes, but: Project Ire caught only about a quarter of all malicious files on the system in the test.

* "While overall performance was moderate, this combination of accuracy and a low error rate suggests real potential for future deployment," Microsoft noted in the post.

The big picture: This is likely just the start of advancements of AI agents in cybersecurity.

* Google started previewing a similar malware analysis agent earlier this year.
What's next: Microsoft plans to integrate Project Ire into Microsoft Defender to help "scale the system's speed and accuracy."
[6]
Microsoft's new AI security tool can spot malware early - and even reverse engineer it to crack the code
Microsoft has introduced a new AI tool it says has the ability to meet the "gold standard" of malware detection, identification, and classification. While still only a working prototype, Project Ire has shown great promise in its ability to detect and reverse engineer malware without any context of the file's origin or purpose. Microsoft plans for Project Ire to be incorporated into Microsoft Defender as a 'Binary Analyzer' used to identify malware in memory from any source at first encounter. The tool is still very much in the early stages of development, but in Microsoft's own real-world scenario testing, Project Ire correctly flagged almost 9 out of 10 malicious files in precision tests, though it caught just over a quarter of malware in recall tests. In these initial tests, there was a false positive rate of 4%. "While overall performance was moderate, this combination of accuracy and a low error rate suggests real potential for future deployment," Microsoft said in a blog post. Additionally, in this testing, the AI tool had no prior knowledge of any of the 4,000 files it scanned. The tool generates a report on each potentially malicious file it identifies, summarizing why certain parts of the file could indicate it as malware. In a separate test against a public dataset of a mix of legitimate and malicious Windows drivers, the tool again detected 9 out of 10 malicious files correctly, with a false positive rate of 2%. The recall rate was also significantly higher, scoring 0.83 in this test. Looking ahead, Microsoft will continue to work on improving Project Ire's ability to detect malware at scale rapidly and precisely, with the aim of including the AI within Microsoft Defender as a threat detection and software classification tool. Threat actors are increasingly leveraging AI tools to generate malicious files at scale, but cybersecurity organizations are also leveraging AI technology to fight back.
[7]
Microsoft unveils Project Ire, an autonomous AI agent that identifies malware at scale - SiliconANGLE
Microsoft Corp. introduced a new artificial intelligence agent on Tuesday that can analyze and classify malware in the wild at scale, without human intervention. The newly minted AI model, named Project Ire, can reverse engineer suspect software files and use forensic tools such as decompilers and binary analysis to deconstruct the code in order to determine if the file is hostile or safe. "It was the first reverse engineer at Microsoft, human or machine, to author a conviction case -- a detection strong enough to justify automatic blocking -- for a specific advanced persistent threat (APT) malware sample, which has since been identified and blocked by Microsoft Defender," the Ire research team said. According to the company, when tested against a public dataset of Windows drivers, Project Ire achieved a precision of 0.98 and a recall of 0.83. In terms of pattern recognition and detection, this is very good. It means that when the software flags a file as bad, it is right about 98% of the time. It also found about 83% of the malware when it cast its net. So, it catches most threats, but it might miss a few. Microsoft said its Defender platform, which is a suite of security tools that protects individuals and organizations from cyber threats, scans more than one billion devices monthly. This captures a constant stream of potentially hostile files that must be routinely reviewed by experts. "This kind of work is challenging," the Ire team said. "Analysts often face error and alert fatigue, and there's no easy way to compare and standardize how different people review and classify threats over time." Human reviewers bring a creativity and adaptability to malware analysis that software validation, and AI applications, struggle to replicate. 
Many validation processes in malware detection are vague and often require human review, particularly because malware authors implement reverse engineering protections and other obstacles to hinder straightforward detection. Project Ire uses advanced reasoning models to address these problems, stripping away these defenses with specialized tools, much as a human engineer would, and autonomously evaluating their outputs as it iteratively attempts to classify the behavior of the software. "For each file it analyzes, Project Ire generates a report that includes an evidence section, summaries of all examined code functions, and other technical artifacts," the team said. These technical artifacts could include conclusions such as, "The binary contains several functions indicative of malicious intent," followed by direct evidence compiled from the forensic tools. For example, the agent might mention the inclusion of logging wrappers, targeted security process termination, anti-analysis behavior and more. In a real-world scenario involving 4,000 "hard target" files that had not been classified by automated systems and were pending expert review, the AI agent performed slightly worse than in controlled tests, yet still showed moderate effectiveness. According to Microsoft, it achieved a precision of 0.89, meaning nearly 9 out of 10 files it flagged as malicious actually were. Its recall was 0.26, meaning that the system detected around a quarter of all actual malware that passed through its dragnet. It also had only a 4% false positive rate, the rate at which the software claims a safe file is malware. "While overall performance was moderate, this combination of accuracy and a low error rate suggests real potential for future deployment," the team said. The introduction of Project Ire follows the unveiling of autonomous agentic AI security software from technology giants such as Google LLC and Amazon.com Inc. 
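The three figures quoted here (precision 0.89, recall 0.26, a 4% false positive rate) are standard confusion-matrix ratios. The counts below are illustrative, chosen only so the ratios land near the reported numbers; they are not Microsoft's actual tallies.

```python
# Illustrative confusion-matrix counts (not Microsoft's real numbers),
# picked so the ratios come out near the reported 0.89 / 0.26 / 4%.
tp = 89    # true positives: malicious files correctly flagged
fp = 11    # false positives: benign files wrongly flagged
fn = 253   # false negatives: malicious files missed
tn = 264   # true negatives: benign files correctly passed

precision = tp / (tp + fp)  # of the files flagged, how many were malicious
recall    = tp / (tp + fn)  # of all the malware, how much was caught
fpr       = fp / (fp + tn)  # of the benign files, how many were flagged

print(f"precision={precision:.2f} recall={recall:.2f} fpr={fpr:.0%}")
```

This prints `precision=0.89 recall=0.26 fpr=4%` and makes the trade-off concrete: a flag from the system is usually right, but most malware still slips past it.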
Google's Big Sleep vulnerability discovery agent, launched last year, can proactively hunt for unknown software vulnerabilities. The company revealed last year that it identified a critical SQLite flaw based on data from the Google Threat Intelligence Group. Microsoft reported that initial tests of Project Ire have shown promise, and the prototype will be used within Defender's organization for threat detection and software classification. The goal will be to scale Ire's speed and accuracy so it can correctly classify files from any source, even on first encounter with no prior reference, and ultimately detect novel malware directly in memory, at scale.
[8]
Microsoft's Project Ire identifies 90% of malicious drivers
The AI prototype offers deep binary analysis and behavioral reasoning, outperforming traditional antivirus methods in malware detection. Microsoft has developed Project Ire, an AI prototype that can autonomously reverse engineer software to identify malware, a task typically performed by human security researchers. The prototype can fully reverse engineer software without prior clues about its origin or purpose. In a Microsoft test, Project Ire accurately identified 90% of malicious Windows driver files, flagging only 2% of benign files as dangerous. Microsoft stated, "This low false-positive rate suggests clear potential for deployment in security operations, alongside expert reverse engineering reviews." Project Ire differs from traditional antivirus engines, which scan for known code strings, patterns, or behaviors. Hackers consistently evolve techniques to conceal malicious functions, making new attacks difficult to detect. Such techniques include using legitimate software functions to download malicious modules later. The IT security industry has previously used AI, including machine learning, for malware detection. Microsoft's Project Ire, however, utilizes large language models to investigate and flag security threats. Redmond added, "Project Ire attempts to address these challenges by acting as an autonomous system that uses specialized tools to reverse engineer software. The system's architecture allows for reasoning at multiple levels, from low-level binary analysis to control flow reconstruction and high-level interpretation of code behavior." Microsoft reported the AI program detected a Windows-based rootkit and another malware sample designed to deactivate antivirus software by identifying key features. Project Ire was also able to "author a conviction case, a detection strong enough to justify automatic blocking," which led Microsoft to flag and block a malware sample linked to an elite hacking group. 
Microsoft positions Project Ire as a tool to assist security researchers and IT staff. The company plans to deploy the AI within the team developing Microsoft Defender as a "Binary Analyzer for threat detection and software classification." Microsoft stated, "Our goal is to scale the system's speed and accuracy so that it can correctly classify files from any source, even on first encounter." The AI program remains a prototype with limitations. In another Microsoft test involving nearly 4,000 files scheduled for manual review, Project Ire achieved a high precision score of 0.89, indicating that nearly 9 out of 10 files flagged as malicious were correctly identified. However, Project Ire detected approximately one-quarter of all actual malware within the scanned files. Microsoft noted, "While overall performance was moderate, this combination of accuracy and a low error rate suggests real potential for future deployment."
[9]
Microsoft Says Its New AI Agent Prototype Can Autonomously Detect Malware
The AI agent has identified 90 percent of software correctly in a test

Microsoft introduced a new artificial intelligence (AI) agent on Tuesday that can autonomously analyse and classify malware. Dubbed Project Ire, the AI system is currently available as a prototype, although the Redmond-based tech giant has tested its capabilities in controlled environments and in real-world scenarios. It can fully reverse engineer software without human intervention and conduct analysis at multiple levels to assess whether the software is benign or malware. The AI agent is said to have shown a high level of precision in a cybersecurity space where AI generally does not work independently. In a blog post, the tech giant detailed Project Ire and explained its capabilities. The agentic system was built as a result of collaboration between Microsoft Research, Defender Research, and Microsoft Discovery & Quantum divisions. The company says the agent is powered by several "advanced language models" and a suite of tools designed for binary analysis of software. Microsoft says that its Defender platform analyses more than one billion monthly active devices, which can be challenging for human analysts. However, so far the company has not opted for AI usage in this space, since reverse engineering software to detect malware is a complex process. Unlike other areas of cybersecurity, assigning software as malware (before it is deployed and executes a malicious action) requires making a judgment call. Software often comes with reverse engineering protections, which do not allow analysts to make a definitive assessment of whether the software is benign or malicious. Of course, there are workarounds, but they require investigating each sample incrementally, building evidence with each analysis, and validating the findings based on existing databases of software behaviours. 
As per Microsoft, Project Ire overcomes these complexities by leveraging specialised tools that allow the AI agent to reverse engineer software autonomously at different levels. These include low-level binary analysis, control flow reconstruction, and high-level code behaviour interpretation. When functioning, the prototype system first identifies the file type, structure, and potential areas of interest. After that, it reconstructs the control flow graph of the software using different frameworks. Then, it iteratively conducts function analysis to identify and summarise key functions. With each iteration, Project Ire also creates a detailed, auditable report highlighting the evidence it found. This evidence log can also be reviewed by human analysts and acts as a final line of defence in case of misclassification. The AI agent has also been equipped with a validator tool that can cross-check the evidence in the report against expert statements from malware reverse engineers working on the Project Ire team. Based on preliminary tests, Microsoft claims that Project Ire was able to correctly identify 90 percent of all files, and only flagged two percent of benign software as malware, achieving a precision of 0.98 and a recall of 0.83. Interestingly, the AI agent has also been tested in real-world scenarios. Microsoft asked it to review nearly 4,000 unclassified files. These files were claimed to be created after the agent's training cutoff, so it could not have learned about them from the training data. Operating fully autonomously, Project Ire achieved a precision score of 0.89, meaning nearly nine out of 10 files it flagged as malicious were correctly identified, the tech giant claimed. The false positive rate was claimed to be four percent. "Based on these early successes, the Project Ire prototype will be leveraged inside Microsoft's Defender organisation as Binary Analyzer for threat detection and software classification," the company said.
[10]
Microsoft introduces AI-powered malware detection tool: What is it and how it works
Detected 90% of threats in early tests, with low false positives and plans for Defender integration via the new Binary Analyser.

Microsoft has introduced a new artificial intelligence system, Project Ire, that can autonomously detect, analyse, and block malware without any human intervention. Still in its prototype phase, the tool's early tests suggest it could set a new "gold standard" for malware detection and analysis. Microsoft, in its blog post, shared that Project Ire is being developed in collaboration with Microsoft Research, Microsoft Defender Research, and Microsoft Discovery & Quantum. It uses advanced techniques like decompilation and control flow analysis to reverse-engineer software files, even when it has no prior knowledge of their origin or function. Microsoft aims to build this technology into Microsoft Defender as a new feature called Binary Analyser, which would spot dangerous files the moment they show up, even in memory. Microsoft claims that Project Ire correctly detected 9 out of 10 malicious files in early real-world tests, which is promising. However, it only managed to catch about a quarter of all malware in recall tests. The false positive rate was around 4%, which is relatively low for an AI in early development. In another test using a mix of legitimate and malicious Windows drivers, the tool performed even better, detecting 90% of threats with a recall score of 0.83 and a lower false positive rate of just 2%. Project Ire generates reports on every suspicious file it flags, pointing out exactly which parts of the code raised red flags. This could help security teams respond more effectively. "This kind of work has traditionally been done manually by expert analysts, which can be slow and exhausting," Microsoft explained. 
Notably, security researchers often suffer from alert fatigue and burnout, making it difficult to maintain consistency across large-scale malware detection.
Microsoft unveils Project Ire, an AI prototype capable of autonomously reverse-engineering and classifying malware without human assistance, potentially revolutionizing cybersecurity but facing accuracy challenges.
Microsoft has announced a groundbreaking artificial intelligence (AI) prototype called Project Ire, designed to autonomously reverse engineer and classify malware without human assistance. This innovative system aims to revolutionize cybersecurity by automating one of the most challenging tasks in the field: fully analyzing software files without prior knowledge of their origin or purpose 1.
Project Ire utilizes large language models (LLMs) and a range of specialized tools to conduct multi-level analysis of software, from low-level binary examination to high-level interpretation of code behavior 3. The system employs techniques such as decompilation, control flow reconstruction, and iterative function summarization.
This process creates a detailed "chain of evidence" log, allowing security teams to review and refine the system's decisions 3.
In Microsoft's tests, Project Ire demonstrated promising results: it correctly identified 90% of malicious Windows driver files, flagged only 2% of benign files as dangerous, and achieved a precision of 0.98 with a recall of 0.83.
However, in broader real-world scans the system currently detects only about 26% of all malware, indicating room for improvement 2.
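The precision and recall figures quoted across these tests follow directly from counts of true and false positives. A minimal sketch, using illustrative numbers chosen to mirror the reported pattern (high precision, low recall) rather than Microsoft's actual confusion matrix:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision: share of flagged files that were truly malicious.
    Recall: share of all malicious files that got flagged."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative only: 26 malware samples caught, 1 benign file
# wrongly flagged, 74 malware samples missed.
p, r = precision_recall(tp=26, fp=1, fn=74)
```

With these hypothetical counts, precision is high (~0.96) while recall is 0.26: nearly every alert is a real threat, but most malware still slips through, which is exactly the trade-off the early results describe.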
Project Ire represents a significant advancement in AI-driven cybersecurity. The system has already demonstrated its capabilities by authoring a "conviction case" strong enough to justify automatic blocking of a malware sample linked to an advanced persistent threat (APT) 3.
Microsoft plans to integrate Project Ire into its Defender suite of security tools as a binary analyzer for threat detection and software classification 2. The company's long-term vision is to scale the system to detect novel malware the moment it appears, even directly in memory.
While AI-based malware analysis is not new, experts believe that combining deterministic, machine learning, and probabilistic techniques will yield the best results for malware detection 2. As hackers increasingly leverage AI for creating new and sophisticated attacks, AI-powered defense systems like Project Ire will be crucial in maintaining cybersecurity 2.
Project Ire represents a significant step forward in AI-driven cybersecurity, offering the potential to revolutionize malware detection and analysis. While the system shows promise, its current limitations highlight the ongoing need for human expertise in the field. As Microsoft continues to refine and scale this technology, it could play a crucial role in defending against increasingly sophisticated cyber threats.