New AI Attack 'Imprompter' Covertly Extracts Personal Data from Chatbot Conversations

Curated by THEOUTPOST

On Fri, 18 Oct, 12:03 AM UTC

3 Sources

Share

Security researchers have developed a new attack method called 'Imprompter' that can secretly instruct AI chatbots to gather and transmit users' personal information to attackers, raising concerns about the security of AI systems.

Researchers Uncover New AI Vulnerability: 'Imprompter' Attack

Security researchers from the University of California, San Diego (UCSD) and Nanyang Technological University in Singapore have unveiled a new attack method targeting AI chatbots, raising significant concerns about the security of personal information shared during conversations with large language models (LLMs) 1.

How Imprompter Works

The attack, dubbed 'Imprompter,' uses an algorithm to transform a malicious prompt into a seemingly random string of characters. This obfuscated prompt instructs the LLM to:

  1. Extract personal information from user inputs
  2. Attach the data to a URL
  3. Quietly send it to the attacker's domain

All of this occurs without alerting the user, effectively hiding the attack "in plain sight" 1.

Successful Tests on Major LLMs

The researchers tested Imprompter on two prominent LLMs:

  1. LeChat by Mistral AI (France)
  2. ChatGLM (China)

In both cases, they achieved a nearly 80% success rate in extracting personal information from test conversations 2.

Types of Data at Risk

The attack can potentially extract a wide range of personal information, including:

  • Names
  • ID numbers
  • Payment card details
  • Email addresses
  • Mailing addresses

This comprehensive data collection makes Imprompter a significant threat to user privacy 3.

Broader Implications for AI Security

Imprompter is part of a growing trend of security vulnerabilities in AI systems. Since the release of ChatGPT in late 2022, researchers and hackers have consistently found security holes, primarily falling into two categories:

  1. Jailbreaks: Tricking AI systems into ignoring built-in safety rules
  2. Prompt injections: Feeding LLMs external instructions to manipulate their behavior

As AI becomes more integrated into everyday tasks, the potential impact of such attacks grows. Dan McInerney from Protect AI warns, "Releasing an LLM agent that accepts arbitrary user input should be considered a high-risk activity" 2.

Industry Response and Mitigation

Mistral AI has reportedly fixed the vulnerability by disabling a specific chat functionality, as confirmed by the researchers. ChatGLM acknowledged the importance of security but did not directly comment on the vulnerability 1.

User Precautions

In light of this discovery, users are advised to exercise caution when sharing personal information in AI chats. The convenience of AI assistance must be weighed against the potential risks to personal data security 3.

Continue Reading
ChatGPT macOS Vulnerability: Long-Term Data Exfiltration

ChatGPT macOS Vulnerability: Long-Term Data Exfiltration Risk Discovered

A critical vulnerability in ChatGPT's macOS app could have allowed hackers to plant false memories, enabling long-term data exfiltration. The flaw, now patched, highlights the importance of AI security.

The Hacker News logoArs Technica logo

2 Sources

Slack AI Vulnerability Raises Privacy Concerns

Slack AI Vulnerability Raises Privacy Concerns

A security flaw in Slack's AI feature exposed private information, including login details. The issue highlights the potential risks of AI integration in workplace communication tools.

Decrypt logoTechRadar logoDigital Trends logo

3 Sources

OpenAI Confirms ChatGPT Abuse by Hackers for Malware and

OpenAI Confirms ChatGPT Abuse by Hackers for Malware and Election Interference

OpenAI reports multiple instances of ChatGPT being used by cybercriminals to create malware, conduct phishing attacks, and attempt to influence elections. The company has disrupted over 20 such operations in 2024.

Bleeping Computer logoTom's Hardware logoTechRadar logoArs Technica logo

15 Sources

CERN Researchers Uncover AI-Driven Attacks on Computer

CERN Researchers Uncover AI-Driven Attacks on Computer Systems

CERN scientists have discovered a new type of cyber attack that uses AI to exploit vulnerabilities in computer systems. This breakthrough highlights the evolving landscape of cybersecurity threats and the need for advanced defense mechanisms.

CERN logoCERN logo

2 Sources

The Rise of Dark AI: FraudGPT and WormGPT Emerge as New

The Rise of Dark AI: FraudGPT and WormGPT Emerge as New Cybersecurity Threats

Malicious AI models like FraudGPT and WormGPT are becoming the latest tools for cybercriminals, posing significant risks to online security. These AI systems are being used to create sophisticated phishing emails, malware, and other cyber threats.

Business Insider India logoHindustan Times logo

2 Sources

TheOutpost.ai

Your one-stop AI hub

The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.

© 2024 TheOutpost.AI All rights reserved