New AI Attack 'Imprompter' Covertly Extracts Personal Data from Chatbot Conversations

3 Sources

Share

Security researchers have developed a new attack method called 'Imprompter' that can secretly instruct AI chatbots to gather and transmit users' personal information to attackers, raising concerns about the security of AI systems.

News article

Researchers Uncover New AI Vulnerability: 'Imprompter' Attack

Security researchers from the University of California, San Diego (UCSD) and Nanyang Technological University in Singapore have unveiled a new attack method targeting AI chatbots, raising significant concerns about the security of personal information shared during conversations with large language models (LLMs)

1

.

How Imprompter Works

The attack, dubbed 'Imprompter,' uses an algorithm to transform a malicious prompt into a seemingly random string of characters. This obfuscated prompt instructs the LLM to:

  1. Extract personal information from user inputs
  2. Attach the data to a URL
  3. Quietly send it to the attacker's domain

All of this occurs without alerting the user, effectively hiding the attack "in plain sight"

1

.

Successful Tests on Major LLMs

The researchers tested Imprompter on two prominent LLMs:

  1. LeChat by Mistral AI (France)
  2. ChatGLM (China)

In both cases, they achieved a nearly 80% success rate in extracting personal information from test conversations

2

.

Types of Data at Risk

The attack can potentially extract a wide range of personal information, including:

  • Names
  • ID numbers
  • Payment card details
  • Email addresses
  • Mailing addresses

This comprehensive data collection makes Imprompter a significant threat to user privacy

3

.

Broader Implications for AI Security

Imprompter is part of a growing trend of security vulnerabilities in AI systems. Since the release of ChatGPT in late 2022, researchers and hackers have consistently found security holes, primarily falling into two categories:

  1. Jailbreaks: Tricking AI systems into ignoring built-in safety rules
  2. Prompt injections: Feeding LLMs external instructions to manipulate their behavior

As AI becomes more integrated into everyday tasks, the potential impact of such attacks grows. Dan McInerney from Protect AI warns, "Releasing an LLM agent that accepts arbitrary user input should be considered a high-risk activity"

2

.

Industry Response and Mitigation

Mistral AI has reportedly fixed the vulnerability by disabling a specific chat functionality, as confirmed by the researchers. ChatGLM acknowledged the importance of security but did not directly comment on the vulnerability

1

.

User Precautions

In light of this discovery, users are advised to exercise caution when sharing personal information in AI chats. The convenience of AI assistance must be weighed against the potential risks to personal data security

3

.

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2025 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo