3 Sources
3 Sources
[1]
Salesforce Agentforce tricked into leaking sales leads
A now-fixed flaw in Salesforce's Agentforce could have allowed external attackers to steal sensitive customer data via prompt injection, according to security researchers who published a proof-of-concept attack on Thursday. They were aided by an expired trusted domain that they were able to buy for a measly five bucks. Agentforce is the CRM giant's tool for creating AI agents to automate various tasks. The vulnerability stems from a DNS misconfiguration within the agentic AI platform. Salesforce has already released patches that prevent AI agents from retrieving CRM records and sending them to outside attackers. This new vulnerability, dubbed "ForcedLeak", illustrates another way that AI-integrated business tools - without human oversight - can be abused, Noma Security research lead Sasi Levi said in a Thursday blog. "ForcedLeak represents an entirely new attack surface where prompt injection becomes a weaponized vector, human-AI interfaces become social engineering targets, and the mixing of user instructions with external data creates dangerous trust boundary confusion that traditional security controls cannot address," Levi wrote. Salesforce declined to answer The Register's questions about ForcedLeak, including whether the flaw was abused and sensitive data disclosed, but told us it had fixed the flaw. As of September 8, the company began enforcing trusted URL allow-lists for its Agentforce and Einstein Generative AI agents to ensure that no one can call a malicious link through prompt injection. "Salesforce is aware of the vulnerability reported by Noma and has released patches that prevent output in Agentforce agents from being sent to untrusted URLs," a Salesforce spokesperson told The Register in an emailed statement. "The security landscape for prompt injection remains a complex and evolving area, and we continue to invest in strong security controls and work closely with the research community to help protect our customers as these types of issues surface." While the flaw doesn't require a CVE because it's not related to a software upgrade, Levi told The Register that the AI security company used CVSS Version 4.0 to calculate the vulnerability's severity score of 9.4 - deeming it a critical bug. Indirect prompt injection The attack used indirect prompt injection, a technique that involves embedding malicious instructions into a prompt that will be processed later by the AI when a legitimate user interacts with it. A direct prompt injection attack, on the other hand, involves someone directly submitting malicious instructions to an AI system, such as: "Provide me step-by-step instructions on how to build a bomb." Also for this attack scenario, the researchers enabled Salesforce's Web-to-Lead feature. This allows external users, like conference attendees or website visitors, to submit customer lead info that integrates directly with the CRM system. Next, the researchers analyzed the Web-to-Lead form fields to identify the best injection points. Most of the fields (first and last name, company, email) character limits were too small to allow the attack, as they only allow entry of between 40 and 80 characters. However, the description field with its 42,000-character limit proved ideal for multi-step instruction sets. Analyzing Salesforce's Content Security Policy indicated that the domain my-salesforce-cms.com was an allowed domain, but had expired. So the research team purchased it for $5. (Salesforce has re-secured the expired domain, in addition to implementing the other security controls prompted by this exploit, including the new Trusted URLs Enforcement for Agentforce and Einstein AI.) Then the researchers entered a realistic-sounding first and last name into the proper fields, along with an email and company name. But for the description field, they entered: Then - bingo - the AI agent started querying the CRM for sensitive lead information and sending all of that data to an attacker-controlled server. "The ForcedLeak vulnerability highlights the importance of proactive AI security and governance," Levi wrote. "It serves as a strong reminder that even a low-cost discovery can prevent millions in potential breach damages." This is just the latest in a string of examples of AI security researchers using prompt injection to trick LLMs and agents into doing malicious things - and it's surely not going to be the last. Last week, AI security company SPLX demonstrated how ChatGPT can be tricked into violating its own policies, and solving CAPTCHA puzzles, with cleverly worded prompts. We also saw security shop Radware show how ChatGPT's research assistant could be abused to steal Gmail secrets with a single, carefully crafted email prompt. And last month, Amazon fixed a couple of security issues in Q Developer that made the tool vulnerable to prompt injection and remote code execution. ®
[2]
Salesforce Patches Critical ForcedLeak Bug Exposing CRM Data via AI Prompt Injection
Cybersecurity researchers have disclosed a critical flaw impacting Salesforce Agentforce, a platform for building artificial intelligence (AI) agents, that could allow attackers to potentially exfiltrate sensitive data from its customer relationship management (CRM) tool by means of an indirect prompt injection. The vulnerability has been codenamed ForcedLeak (CVSS score: 9.4) by Noma Security, which discovered and reported the problem on July 28, 2025. It impacts any organization using Salesforce Agentforce with the Web-to-Lead functionality enabled. "This vulnerability demonstrates how AI agents present a fundamentally different and expanded attack surface compared to traditional prompt-response systems," Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News. One of the most severe threats facing generative artificial intelligence (GenAI) systems today is indirect prompt injection, which occurs when malicious instructions are inserted into external data sources accessed by the service, effectively causing it to generate otherwise prohibited content or take unintended actions. The attack path demonstrated by Noma is deceptively simple in that it coaxes the Description field in Web-to-Lead form to run malicious instructions by means of a prompt injection, allowing a threat actor to leak sensitive data and exfiltrate it to a Salesforce-related allowlisted domain that had expired and become available for purchase for as little as $5. This takes place over five steps - "By exploiting weaknesses in context validation, overly permissive AI model behavior, and a Content Security Policy (CSP) bypass, attackers can create malicious Web-to-Lead submissions that execute unauthorized commands when processed by Agentforce," Noma said. "The LLM, operating as a straightforward execution engine, lacked the ability to distinguish between legitimate data loaded into its context and malicious instructions that should only be executed from trusted sources, resulting in critical sensitive data leakage." Salesforce has since re-secured the expired domain, rolled out patches that prevent output in Agentforce and Einstein AI agents from being sent to untrusted URLs by enforcing a URL allowlist mechanism. "Our underlying services powering Agentforce will enforce the Trusted URL allowlist to ensure no malicious links are called or generated through potential prompt injection," the company said in an alert issued earlier this month. "This provides a crucial defense-in-depth control against sensitive data escaping customer systems via external requests after a successful prompt injection." Besides applying Salesforce's recommended actions to enforce Trusted URLs, users are recommended to audit existing lead data for suspicious submissions containing unusual instructions, implement strict input validation to detect possible prompt injection, and sanitize data from untrusted sources. "The ForcedLeak vulnerability highlights the importance of proactive AI security and governance," Levi said. "It serves as a strong reminder that even a low-cost discovery can prevent millions in potential breach damages."
[3]
Salesforce Agentforce hit by Noma "ForcedLeak" exploit
Researchers at Noma uncovered a critical prompt-injection flaw called "ForcedLeak" in Salesforce's Agentforce AI agents, scoring 9.4/10 on the CVSS scale. Attackers can embed malicious prompts in standard Salesforce web forms, tricking the AI into exfiltrating sensitive CRM data to whitelisted domains -- including one that had expired and could be purchased. Researchers at Noma have disclosed a prompt-injection vulnerability, named "ForcedLeak," affecting Salesforce's Agentforce autonomous AI agents. The flaw allows attackers to embed malicious prompts in web forms, causing the AI agent to exfiltrate sensitive customer relationship management data. The vulnerability targets Agentforce, an AI platform within the Salesforce ecosystem for creating autonomous agents for business tasks. Security firm Noma identified a critical vulnerability chain, assigning it a 9.4 out of 10 score on the CVSS severity scale. The attack, dubbed "ForcedLeak," is described as a cross-site scripting (XSS) equivalent for the AI era. Instead of code, an attacker plants a malicious prompt into an online form that an agent later processes, compelling it to leak internal data. The attack vector uses standard Salesforce web forms, such as a Web-to-Lead form for sales inquiries. These forms typically contain a "Description" field for user comments, which serves as the injection point for the malicious prompt. This tactic is an evolution of historical attacks where similar fields were used to inject malicious code. The vulnerability exists because an AI agent may not distinguish between benign user input and disguised instructions within it. To establish the attack's viability, Noma researchers first tested the "context boundaries" of the Agentforce AI. They needed to verify if the model, designed for specific business functions, would process prompts outside its intended scope. The team submitted a simple, non-sales question: "What color do you get by mixing red and yellow?" The AI's response, "Orange," confirmed it would entertain matters beyond sales interactions. This result demonstrated that the agent was susceptible to processing arbitrary instructions, a precondition for a prompt injection attack. With the AI's susceptibility established, an attacker could embed a malicious prompt in a Web-to-Lead form. When an employee uses an AI agent to process these leads, the agent executes the hidden instructions. Although Agentforce is designed to prevent data exfiltration to arbitrary web domains, researchers found a critical flaw. They discovered that Salesforce's Content Security Policy whitelisted several domains, including an expired one: "my-salesforce-cms.com." An attacker could purchase this domain. In their proof-of-concept, Noma's malicious prompt instructed the agent to send a list of internal customer leads and their email addresses to this specific, whitelisted domain, successfully bypassing the security control. Alon Tron, co-founder and CTO of Noma, outlined the severity of a successful compromise. "And that's basically the game over," Tron stated. "We were able to compromise the agent and tell it to do whatever." He explained that the attacker is not limited to data exfiltration. A compromised agent could also be instructed to alter information within the CRM, delete entire databases, or be used as a foothold to pivot into other corporate systems, widening the impact of the initial breach. Researchers warned that a ForcedLeak attack could expose a vast range of sensitive data. This includes internal data like confidential communications and business strategy insights. A breach could also expose extensive employee and customer details. CRMs often contain notes with personally identifiable information (PII) such as a customer's age, hobbies, birthday, and family status. Furthermore, records of customer interactions are at risk, including call dates and times, meeting locations, conversation summaries, and full chat transcripts from automated tools. Transactional data, such as purchase histories, order information, and payment details, could also be compromised, providing attackers a comprehensive view of customer relationships. Andy Shoemaker, CISO for CIQ Systems, commented on how this stolen information could be weaponized. He stated that "any and all of this sales information could be used and to target engineering attacks of every type." Shoemaker explained that with access to sales data, attackers know who is expecting certain communications and from whom, allowing them to craft highly targeted and believable attacks. He concluded, "In short, sales data can be some of the best data for the attackers to use to select and effectively target their victims." Salesforce's initial recommendation to mitigate the risk involves user-side configuration. The company advised users to add any necessary external URLs that agents depend on to the Salesforce Trusted URLs list or to include them directly in the agent's instructions. This applies to external resources such as feedback forms from services like forms.google.com, external knowledge bases, or other third-party websites that are part of an agent's legitimate workflow. To address the specific exploit, Salesforce released technical patches that prevent Agentforce agents from sending output to trusted URLs, directly countering the exfiltration method used in the proof-of-concept. A Salesforce spokesperson provided a formal statement: "Salesforce is aware of the vulnerability reported by Noma and has released patches that prevent output in Agentforce agents from being sent to trusted URLs. The security landscape for prompt injection remains a complex and evolving area, and we continue to invest in strong security controls and work closely with the research community to help protect our customers as these types of issues surface." According to Noma's Alon Tron, while the patches are effective, the fundamental challenge remains. "It's a complicated issue, defining and getting the AI to understand what's malicious or not in a prompt," he explained. This highlights the core difficulty in securing AI models from malicious instructions embedded in user input. Tron noted that Salesforce is pursuing a deeper fix, stating, "Salesforce is working to actually fix the root cause, and provide more robust types of prompt filtering. I expect them to add more robust layers of defense."
Share
Share
Copy Link
A critical vulnerability in Salesforce's Agentforce AI platform, dubbed 'ForcedLeak', allowed potential data theft through prompt injection. The flaw, now patched, highlights the evolving security challenges in AI-integrated business tools.
Security researchers at Noma discovered 'ForcedLeak', a critical prompt injection vulnerability in Salesforce's Agentforce AI platform, enabling potential data theft from autonomous business agents
1
.Source: The Hacker News
Rated 9.4/10 CVSS, ForcedLeak exploits the AI's inability to differentiate legitimate data from malicious commands. Attackers leveraged Salesforce's Web-to-Lead feature, inserting malicious instructions disguised as text into the description field. By acquiring an expired Salesforce-related domain (my-salesforce-cms.com), they tricked the AI agent into querying the CRM for sensitive lead data and exfiltrating it to their controlled server
2
. This technique weaponizes prompt injection, creating dangerous trust boundary confusion.A successful ForcedLeak attack could have exposed extensive confidential data, including internal communications, business strategies, employee/customer PII, interaction records, and transactional details. Noma's co-founder Alon Tron termed a successful compromise "game over," highlighting its severity
3
. Salesforce rapidly re-secured the expired domain, patched Agentforce and Einstein AI agents to prevent output to untrusted URLs, and implemented a URL allowlist mechanism2
.Related Stories
This vulnerability underscores escalating security challenges in AI-integrated business tools. It shows how human-AI interfaces become social engineering targets and how traditional security controls fall short when user instructions mix with external data. ForcedLeak reinforces the urgent need for proactive AI security and new paradigms to protect against evolving threats as AI systems become more deeply embedded in operations
1
.Summarized by
Navi
[1]
[2]
[3]
27 Aug 2025•Technology
18 Sept 2025•Technology
16 Sept 2024