AI Caricature Trend Sparks Data Privacy Warnings as Experts Flag Security Risks

Reviewed by Nidhi Govil

The viral ChatGPT caricature trend has millions sharing AI-generated images of themselves, but cybersecurity experts warn these uploads pose serious risks. Images and personal details shared with OpenAI could be retained indefinitely and potentially exploited by fraudsters for deepfakes, impersonation, and personalized scams.

ChatGPT Caricature Trend Goes Viral Across Social Media

The AI caricature trend has exploded across social media, with users flooding Reddit, X, and other networks with AI-generated cartoon versions of themselves. The trend involves uploading photos to ChatGPT with prompts like "Create a caricature of me and my job based on everything you know about me." The resulting images typically depict users in cartoon style, surrounded by items reflecting their personality, hobbies, or profession [1]. While the results can be cute and entertaining, cybersecurity experts warn that this viral phenomenon raises serious data privacy concerns about OpenAI that users should not ignore [2].

Source: Mashable

Security Risks With AI Chatbots Emerge as Major Concern

Cybersecurity experts have issued stark warnings about the security risks AI chatbots pose in connection with this trend. When users upload images to AI chatbots, the system processes each photo to extract data such as emotion, environment, or location information, according to cybersecurity consultant Jake Moore [2]. Bob Long, vice-president at age authentication company Daon, emphasized that participants are "doing fraudsters' work for them - giving them a visual representation of who you are." Long even suggested the wording of the trend "sounds like it was intentionally started by a fraudster looking to make the job easy" [2]. Images collected from user content can be retained for an unknown period and potentially folded into AI training datasets.

Fraudsters Could Exploit Images for Deepfakes and Impersonation

The threat of impersonation and personalized scams looms large. Charlotte Wilson, head of enterprise at Israeli cybersecurity company Check Point, warned that in the wrong hands, a single high-resolution image could be used to create fake social media accounts or realistic deepfakes deployed to run scams. "Selfies help criminals move from generic scams to personalised, high-conviction impersonation," Wilson explained [2]. A data breach at OpenAI could put sensitive data, including uploaded images and personal information gathered by the chatbot, into the hands of bad actors ready to exploit it. That reality makes understanding data privacy essential for anyone participating in viral social media trends.

OpenAI Privacy Settings and AI Model Training Practices

OpenAI's privacy settings state that uploaded images may be used to improve the model, which can include training it. When questioned about these practices, ChatGPT clarified that this does not mean every photo is placed in a public database; instead, the chatbot uses patterns from user content to refine how the system generates images [2]. However, the accuracy of your caricature reveals how much ChatGPT and OpenAI know about you. When one user tried generating a caricature, the results were bland: ChatGPT admitted it had "used fun but non-specific caricature tropes" because it lacked deep personal information about the user [1]. More detailed caricatures suggest the platform has accumulated significant data about your habits, interests, and identity.

How to Delete Chat Histories and Protect Your Data Privacy

For users concerned about digital hygiene, OpenAI provides several options to manage their data. ChatGPT saves a history of previous chats, which users can delete to limit the data OpenAI holds about them. To delete an individual conversation, go to the "Your chats" tab in the ChatGPT sidebar, click the three dots next to a chat, and select "Delete." Users can also delete all chats by clicking their profile icon, navigating to "Settings," then "Data controls," and selecting "Delete all chats" [1]. Additionally, users may turn off the "Improve the model for everyone" setting, which allows OpenAI to use chats for model training. OpenAI also operates a privacy portal where users can submit data deletion requests and opt out of AI data training by clicking "do not train on my content" [2].

Expert Recommendations for Safer Participation

For those still wanting to follow the trend, experts recommend limiting what you share. Wilson advised users to avoid uploading images that reveal any identifying information: "Crop tightly, keep the background plain, and do not include badges, uniforms, work lanyards, location clues or anything that ties you to an employer or a routine." She also cautioned against oversharing personal information in prompts, such as job title, city, or employer [2]. Moore recommended reviewing privacy settings before participating. Under EU law, users can request the deletion of personal data collected by the company, though OpenAI notes it may retain some information even after a deletion request to address fraud, abuse, and security concerns.

Parasocial Relationships With Chatbots Raise Additional Concerns

Beyond immediate security threats, the trend highlights broader concerns about parasocial relationships with chatbots. People use AI chatbots in many ways, and over time a chatbot can start to feel like more than a generic assistant. Some go to ChatGPT with deeply personal medical questions, while others treat it as a relationship advisor, life coach, or even a close personal friend [1]. If users are developing an emotional reliance on ChatGPT, or starting to believe the chatbot is "alive" and in a relationship with them, it may be time to take a break. The long-term effects of such reliance remain unknown, but experts warn the behavior may be harmful if it takes time and energy away from other relationships, social life, and hobbies. Organizations like Common Sense Media have also warned that AI companions are unsafe for users under 18 [1]. Users can also send requests, questions, and comments directly to OpenAI using the email address [email protected].

Source: Euronews
