Eurostar accused security researchers of blackmail after AI chatbot vulnerability disclosure

2 Sources

Share

Security firm Pen Test Partners discovered four critical flaws in Eurostar's AI chatbot that could allow attackers to inject malicious content and leak system information. After reporting the vulnerabilities through official channels and receiving no response for weeks, the researchers were accused of blackmail by Eurostar's head of security. The incident highlights growing challenges in responsible disclosure as AI-powered customer interfaces become widespread.

Security Researchers Face Blackmail Accusation After Eurostar AI Chatbot Flaw Disclosure

Pen Test Partners, a U.K. security consulting firm, discovered four significant vulnerabilities in Eurostar's public AI chatbot during routine testing earlier this year

1

. The flaws could enable attackers to bypass safety measures through prompt injection, extract system prompts, and inject malicious HTML content into chatbot responses

2

. What should have been a straightforward vulnerability disclosure process turned into a cautionary tale about the challenges in responsible disclosure when Eurostar accused them of blackmail after the pen testers attempted to follow up on their ignored bug report

1

.

Source: SiliconANGLE

Source: SiliconANGLE

The discovery raises critical questions about security in AI-powered customer interfaces as companies across industries deploy chatbots without adequate security controls. While Eurostar's chatbot wasn't connected to sensitive customer data at the time, the vulnerabilities could become severe if the system expanded to handle bookings, personal information, or account access

2

.

How the Vulnerability Reporting Program Failed

Ross Donald, head of core pen test at Pen Test Partners, first reported the Eurostar AI flaws through the company's vulnerability disclosure program on June 11

1

. After receiving no response, he followed up on June 18—again, silence. On July 7, managing partner Ken Munro reached out to Eurostar's head of security via LinkedIn. A week later, they were told to use the vulnerability reporting program, which they had already done. By July 31, Eurostar claimed there was no record of their report

1

.

The confusion stemmed from Eurostar outsourcing its vulnerability disclosure program between the initial report and follow-up attempts. The company had launched a new disclosure form and retired the old one, potentially losing multiple disclosures during the transition

1

. This operational failure highlights how even well-intentioned security programs can break down without proper handoff procedures.

The Blackmail Accusation That Shocked Security Professionals

When Munro suggested in a LinkedIn message that "maybe a simple acknowledgement of the original email report would have helped," the Eurostar security executive responded with a statement that stunned the researchers: "Some might consider this to be blackmail"

1

. Donald expressed disbelief at the accusation, noting that blackmail requires a threat and no threat was made. "We don't work like that!" he wrote in a blog post detailing the incident

2

.

Source: The Register

Source: The Register

The blackmail accusation demonstrates a fundamental misunderstanding of ethical security research and the AI chatbot flaw disclosure process. Security professionals who identify vulnerabilities and report them through official channels are performing a service, not making threats. The incident serves as a warning to companies about the importance of training security staff to work cooperatively with researchers rather than viewing them as adversaries.

Technical Details: Manipulation of Chat History and Prompt Injection Attacks

The vulnerabilities stemmed from flawed design in the API-driven chatbot. Each time a user sent a message, the frontend relayed the entire chat history to the API, but guardrail checks only verified the latest message

1

. If that message passed safety checks, the server marked it "passed" and returned a signature. This created an exploitable gap: earlier messages could be tampered with on the user's screen and fed into the model as having passed guardrail checks.

By sending a legitimate message that passed verification—such as requesting a travel itinerary—attackers could edit previous messages in the chat history and trick the bot into leaking information through prompt injection

1

. The researchers successfully extracted system prompts and learned how the chatbot generated HTML for reference links, information that could facilitate future attacks.

HTML Injection and Cross-Site Scripting Risks

The chatbot proved vulnerable to HTML injection, enabling attackers to manipulate responses to include phishing websites or other malicious content that would appear as legitimate Eurostar information

1

. The backend also failed to verify conversation and message IDs, which combined with malicious content injection "strongly suggests a plausible path to stored or shared XSS," according to Donald

1

.

Stored cross-site scripting (XSS) attacks occur when malicious code is injected into a vulnerable field—in this case, the chat history—and the application treats it as legitimate, delivering it to other users as trusted content. This attack vector could hijack sessions, steal secrets, or redirect users to phishing websites

1

. While the current chatbot doesn't handle sensitive customer data, these flaws could become critical if functionality expands.

Uncertain Resolution and Broader Implications

Eurostar eventually found the original email containing the bug report and fixed "some" of the flaws, but uncertainty remains about whether all vulnerabilities were addressed

1

. Donald noted: "We still don't know if it was being investigated for a while before that, if it was tracked, how they fixed it, or if they even fully fixed every issue!"

2

The episode serves as a reminder that chatbot security extends beyond AI behavior to encompass the underlying software infrastructure. As AI-powered customer interfaces proliferate across industries, companies must build security controls from the start rather than bolting them on later

1

. The security sector functions best through cooperation, not through misguided accusations against researchers acting in good faith

2

. Organizations deploying AI chatbots should watch for similar architectural flaws in their own systems and establish clear, functional disclosure processes that don't disappear during vendor transitions.

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2025 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo