2 Sources
2 Sources
[1]
Pen testers accused of 'blackmail' over Eurostar AI flaws
Researchers at Pen Test Partners found four flaws in Eurostar's public AI chatbot that, among other security issues, could allow an attacker to inject malicious HTML content or trick the bot into leaking system prompts. Their thank you from the company: being accused of "blackmail." The researchers reported the weaknesses to the high-speed rail service through its vulnerability disclosure program. While Eurostar ultimately patched some of the issues, during the responsible disclosure process, the train operator's head of security allegedly accused the pen-testing team of blackmail. Here's what happened, according to a blog published this week by the penetration testing and security consulting firm. After initially reporting the security issues - and not receiving any response - via a vulnerability disclosure program email on June 11, the bug hunter Ross Donald says he followed up with Eurostar on June 18. Still no response. So on July 7, managing partner Ken Munro contacted Eurostar's head of security on LinkedIn. About a week later, he was told to use the vulnerability reporting program (they had), and on July 31 learned there was no record of their bug report. "What transpired is that Eurostar had outsourced their VDP between our initial disclosure and hard chase," Donald wrote. "They had launched a new page with a disclosure form and retired the old one. It raises the question of how many disclosures were lost during this process." Eventually, Eurostar found the original email containing the report, fixed "some" of the flaws, and so Pen Test Partners decided to proceed with publishing the blog. But in the LinkedIn back-and-forth, Munro says: "Maybe a simple acknowledgement of the original email report would have helped?" And then, per a LinkedIn screenshot with Eurostar exec's name and photo blacked out, the security boss replied: "Some might consider this to be blackmail." The Register contacted Eurostar about this exchange, and asked whether it had fixed all of the chatbot's issues detailed in the blog. We did not receive an immediate response, but we will update this story if and when we hear back from the train operator. The flaws themselves are relatively easy to abuse and stem from the API-driven chatbot's design. Every time a user sends a message to the chatbot, the frontend relays the entire chat history - not just the latest message - to the API. But it only runs a guardrail check on the latest message to ensure that it's allowed. If that message is allowed, the server marks it "passed" and returns a signature. If the message doesn't pass the safety checks, however, the server responds with "I apologise, but I can't assist with that specific request" and no signature. Because the chatbot only verifies the latest message's signature, earlier messages can be tampered with on the user's screen, and then fed into the model as having passed the safety checks. As long as the user sends a legitimate, harmless message - such as asking the bot to build a travel itinerary - that passes the guardrail checks and returns a valid signature, they can then edit earlier messages in the chat history and trick the bot into leaking information it should not via prompt injection. Further prompt injection allowed the researcher to extract the system prompt and disclosed how the chatbot generated the HTML for its reference links. "That alone is reputationally awkward and can make future attacks easier, but the bigger risk is what happens once the chatbot is allowed to touch personal data or account details," Donald wrote. From there, with more poking, the chatbot revealed that it was vulnerable to HTML injection, which could be abused to trick the model into returning a phishing link or other malicious code inside what looks like a real Eurostar answer. Additionally, the backend didn't verify conversation and message IDs. This, combined with HTML injection, "strongly suggests a plausible path to stored or shared XSS," according to the researcher. Stored XSS, or cross-site scripting, occurs when an attacker injects malicious code into a vulnerable field - in this case, the chat history - and the application treats it as legitimate, delivering it to other users as trusted content and causing their browsers to execute the code. This type of attack is often used to hijack sessions, steal secrets, or send unwitting users to phishing websites. The pen testers say that they don't know if Eurostar fully fixed all of these security flaws. We've asked Eurostar about this and will report back when we receive a response. In the meantime, this should serve as a cautionary tale for companies with consumer-facing chatbots (and, these days, that's just about all of them) to build security controls in from the start. ®
[2]
Researchers say Eurostar accused them of blackmail over AI chatbot flaw disclosure - SiliconANGLE
Researchers say Eurostar accused them of blackmail over AI chatbot flaw disclosure Eurostar International Ltd., the operator of the Eurostar trains that cross the English Channel, has been accused of mishandling the responsible disclosure of security flaws in its customer-facing artificial intelligence chatbot after security researchers were allegedly told their actions could be viewed as blackmail. The allegation comes from U.K. security firm Pen Test Partners LLP, which said it identified multiple vulnerabilities in Eurostar's AI-powered chatbot earlier this year. The vulnerabilities were discovered during routine testing rather than as part of a commissioned engagement. The vulnerabilities detected included weaknesses in how the chatbot handled conversation history and message validation that could allow attackers to manipulate earlier messages in a chat session. The Pen Test Partners researchers were able to bypass safety guardrails, extract internal system information and inject arbitrary HTML into chatbot responses. While the chatbot was not connected to sensitive customer data, the firm warned that such flaws could become more serious if the system were later expanded to handle bookings, personal information, or account access. Being a legitimate company that practices ethical disclosure, Pen Test Partners attempted to report the issues through Eurostar's vulnerability disclosure process beginning in mid-June. After receiving no response, they followed up multiple times via email and later through LinkedIn, but then it gets weird. According to Pen Test Partners, a Eurostar security executive eventually responded but suggested that continued attempts to draw attention to the issue could be interpreted as blackmail. "To say we were surprised and confused by this has to be a huge understatement - we had disclosed a vulnerability in good faith, were ignored, so escalated via LinkedIn private message," explains Ross Donald, head of core pent test at Pen Test Partners, in a blog post. "I think the definition of blackmail requires a threat to be made and there was of course no threat. We don't work like that!" Eurostar later acknowledged that the original disclosure email had been overlooked and said some of the reported issues were subsequently addressed. Exactly what it fixed is unclear, however. "We still don't know if it was being investigated for a while before that, if it was tracked, how they fixed it, or if they even fully fixed every issue!," added Donald. As AI-powered customer interfaces become more widespread across industries, the Eurostar episode serves as a reminder that chatbot security is not just about AI behavior but also about the underlying software infrastructure that supports it. The case also highlights the need to have trained staff who are willing to work with security professionals instead of erroneously accusing them of wrongdoing. The security sector works best with cooperation, not with clown-level false threats of illegality.
Share
Share
Copy Link
Security firm Pen Test Partners discovered four critical flaws in Eurostar's AI chatbot that could allow attackers to inject malicious content and leak system information. After reporting the vulnerabilities through official channels and receiving no response for weeks, the researchers were accused of blackmail by Eurostar's head of security. The incident highlights growing challenges in responsible disclosure as AI-powered customer interfaces become widespread.
Pen Test Partners, a U.K. security consulting firm, discovered four significant vulnerabilities in Eurostar's public AI chatbot during routine testing earlier this year
1
. The flaws could enable attackers to bypass safety measures through prompt injection, extract system prompts, and inject malicious HTML content into chatbot responses2
. What should have been a straightforward vulnerability disclosure process turned into a cautionary tale about the challenges in responsible disclosure when Eurostar accused them of blackmail after the pen testers attempted to follow up on their ignored bug report1
.
Source: SiliconANGLE
The discovery raises critical questions about security in AI-powered customer interfaces as companies across industries deploy chatbots without adequate security controls. While Eurostar's chatbot wasn't connected to sensitive customer data at the time, the vulnerabilities could become severe if the system expanded to handle bookings, personal information, or account access
2
.Ross Donald, head of core pen test at Pen Test Partners, first reported the Eurostar AI flaws through the company's vulnerability disclosure program on June 11
1
. After receiving no response, he followed up on June 18—again, silence. On July 7, managing partner Ken Munro reached out to Eurostar's head of security via LinkedIn. A week later, they were told to use the vulnerability reporting program, which they had already done. By July 31, Eurostar claimed there was no record of their report1
.The confusion stemmed from Eurostar outsourcing its vulnerability disclosure program between the initial report and follow-up attempts. The company had launched a new disclosure form and retired the old one, potentially losing multiple disclosures during the transition
1
. This operational failure highlights how even well-intentioned security programs can break down without proper handoff procedures.When Munro suggested in a LinkedIn message that "maybe a simple acknowledgement of the original email report would have helped," the Eurostar security executive responded with a statement that stunned the researchers: "Some might consider this to be blackmail"
1
. Donald expressed disbelief at the accusation, noting that blackmail requires a threat and no threat was made. "We don't work like that!" he wrote in a blog post detailing the incident2
.
Source: The Register
The blackmail accusation demonstrates a fundamental misunderstanding of ethical security research and the AI chatbot flaw disclosure process. Security professionals who identify vulnerabilities and report them through official channels are performing a service, not making threats. The incident serves as a warning to companies about the importance of training security staff to work cooperatively with researchers rather than viewing them as adversaries.
The vulnerabilities stemmed from flawed design in the API-driven chatbot. Each time a user sent a message, the frontend relayed the entire chat history to the API, but guardrail checks only verified the latest message
1
. If that message passed safety checks, the server marked it "passed" and returned a signature. This created an exploitable gap: earlier messages could be tampered with on the user's screen and fed into the model as having passed guardrail checks.By sending a legitimate message that passed verification—such as requesting a travel itinerary—attackers could edit previous messages in the chat history and trick the bot into leaking information through prompt injection
1
. The researchers successfully extracted system prompts and learned how the chatbot generated HTML for reference links, information that could facilitate future attacks.Related Stories
The chatbot proved vulnerable to HTML injection, enabling attackers to manipulate responses to include phishing websites or other malicious content that would appear as legitimate Eurostar information
1
. The backend also failed to verify conversation and message IDs, which combined with malicious content injection "strongly suggests a plausible path to stored or shared XSS," according to Donald1
.Stored cross-site scripting (XSS) attacks occur when malicious code is injected into a vulnerable field—in this case, the chat history—and the application treats it as legitimate, delivering it to other users as trusted content. This attack vector could hijack sessions, steal secrets, or redirect users to phishing websites
1
. While the current chatbot doesn't handle sensitive customer data, these flaws could become critical if functionality expands.Eurostar eventually found the original email containing the bug report and fixed "some" of the flaws, but uncertainty remains about whether all vulnerabilities were addressed
1
. Donald noted: "We still don't know if it was being investigated for a while before that, if it was tracked, how they fixed it, or if they even fully fixed every issue!"2
The episode serves as a reminder that chatbot security extends beyond AI behavior to encompass the underlying software infrastructure. As AI-powered customer interfaces proliferate across industries, companies must build security controls from the start rather than bolting them on later
1
. The security sector functions best through cooperation, not through misguided accusations against researchers acting in good faith2
. Organizations deploying AI chatbots should watch for similar architectural flaws in their own systems and establish clear, functional disclosure processes that don't disappear during vendor transitions.Summarized by
Navi
[1]
16 Jul 2025•Technology

18 Oct 2024•Technology

08 Aug 2025•Technology

1
Business and Economy

2
Policy and Regulation

3
Technology
