2 Sources
[1]
ChatGPT caricature trend: What to do if OpenAI knows too much
The ChatGPT caricature trend has gone mega-viral, with countless people sharing AI-generated images of themselves on Reddit, X, and other social media platforms. These images are usually quite cute (though they can be bizarre and unsettling). A typical ChatGPT caricature depicts the user in cartoon style, surrounded by items that reflect their personality, hobbies, or profession. You can see hundreds of examples on X based on simple prompts such as "Create a caricature of me and my job based on everything you know about me."

But what if ChatGPT knows you a little too well? The more detailed and accurate your caricature, the more ChatGPT and OpenAI know about you. For instance, when I tried to generate a ChatGPT caricature, the results were painfully bland. When I asked ChatGPT how it decided which details to include in the image, the chatbot basically admitted it simply picked generic items like headphones and coffee: "Because I don't actually have deep personal info about you (beyond what you've shared in chats), I used fun but non-specific caricature tropes." (Emphasis in original.)

So, if your caricature left you feeling a certain type of way, what can you do? It may be time to practice some digital hygiene and take a fresh look at how ChatGPT saves and uses your data.

ChatGPT saves a history of your previous chats, which can be helpful. However, you can delete these chats to limit the data that OpenAI has about you. To delete an individual chat, go to the "Your chats" tab in the ChatGPT sidebar, click the three dots next to a chat, and click "Delete." You can also delete all of your chats: click your profile icon, go to "Settings," then "Data controls," and select "Delete all chats." You may also choose to turn off the "Improve the model for everyone" setting, which allows OpenAI to use your chats for model training.

OpenAI has a privacy portal where users can submit a variety of privacy-related requests, including data deletion requests. You can also send additional requests, questions, and comments directly to OpenAI using the email address [email protected].

People use ChatGPT (and other AI chatbots) in a variety of ways, and over time, it can feel like more than a generic assistant. Some people go to ChatGPT with deeply personal medical questions, while others treat it as a relationship advisor, a life coach, or even a close personal friend. As Mashable has reported previously, a growing number of people are now using AI for companionship. However, if you believe you've developed a parasocial relationship with large language models like ChatGPT, it may be time to reflect on how you interact with this technology. For instance, if you're developing an emotional reliance on ChatGPT, or if you're starting to believe that ChatGPT is "alive" and in a relationship with you, you may want to take a break. The long-term effects of developing an emotional reliance on AI chatbots are unknown, but experts we've spoken to have warned that this type of behavior may be harmful if it takes time and energy away from your other relationships, social life, and hobbies. Organizations like Common Sense Media have also warned that AI companions are unsafe for users under 18.
[2]
Why you should beware of ChatGPT's AI caricature trend
Images uploaded to AI chatbots could be retained for an unknown amount of time and, if in the wrong hands, could lead to impersonation, scams, and fake social media accounts.

The AI-generated caricature trend, which showcases everything a chatbot knows about someone in a colourful picture, can pose serious security risks, according to cybersecurity experts. Users upload a photo of themselves with a company logo or details about their role and ask OpenAI's ChatGPT to create a caricature of them and their job using what the chatbot knows about them.

Cybersecurity experts told Euronews Next that social media challenges, such as AI caricatures, can provide fraudsters with a treasure trove of valuable information. A single image, paired with personal details, can be more revealing than users realise. "You are doing fraudsters' work for them - giving them a visual representation of who you are," according to Bob Long, vice-president at age authentication company Daon. The wording of the trend itself should raise red flags, he argued, because it "sounds like it was intentionally started by a fraudster looking to make the job easy."

When a user uploads an image to an AI chatbot, the system processes the image to extract data, such as the person's emotion, environment, or information that could disclose their location, according to cybersecurity consultant Jake Moore. That information may then be stored for an unknown period of time. Long said the images collected from users can be used and retained to train AI image generators as part of their datasets. A data breach at a company like OpenAI could mean sensitive data, such as uploaded images and personal information gathered by the chatbot, could fall into the hands of bad actors who could exploit it.

In the wrong hands, a single, high-resolution image could be used to create fake social media accounts or realistic AI deepfakes that could be used to run a scam, according to Charlotte Wilson, head of enterprise at Check Point, an Israeli cybersecurity company. "Selfies help criminals move from generic scams to personalised, high-conviction impersonation," she said.

OpenAI's privacy settings state that uploaded images may be used to improve the model, which can include training it. When asked about the model's privacy settings, ChatGPT clarified that this does not mean every photo is placed in a public database. Instead, the chatbot said it uses patterns from user content to refine how the system generates images.

For those who still want to follow the trend, experts recommend limiting what you share. Wilson said users should avoid uploading images that reveal any identifying information. "Crop tightly, keep the background plain, and do not include badges, uniforms, work lanyards, location clues or anything that ties you to an employer or a routine," she said. Wilson also cautioned against oversharing personal information in prompts, such as job title, city or employer. Meanwhile, Moore recommended reviewing privacy settings before participating, including the option to remove data from AI training.

OpenAI has a privacy portal that lets users opt out of AI data training by clicking "do not train on my content." Users can also opt out of training on their text conversations with ChatGPT by turning off the "improve the model for everyone" setting. Under EU law, users can request the deletion of personal data collected by the company. However, OpenAI notes it may retain some information even after deletion to address fraud, abuse and security concerns.
The viral ChatGPT caricature trend has countless people sharing AI-generated images of themselves, but cybersecurity experts warn these uploads pose serious risks. Images and personal details shared with OpenAI could be retained for an unknown amount of time and potentially exploited by fraudsters for deepfakes, impersonation, and personalized scams.
The AI caricature trend has exploded across social media, with users flooding Reddit, X, and other networks with AI-generated cartoon versions of themselves. The trend involves uploading photos to ChatGPT and using prompts like "Create a caricature of me and my job based on everything you know about me." These images typically depict users in cartoon style, surrounded by items reflecting their personality, hobbies, or profession [1]. While the results can be cute and entertaining, cybersecurity experts warn that this viral phenomenon raises serious data privacy concerns that users should not ignore [2].
Source: Mashable
Cybersecurity experts have issued stark warnings about the security risks associated with this trend. When users upload images to AI chatbots, the system processes the photo to extract data such as emotion, environment, or location information, according to cybersecurity consultant Jake Moore [2]. Bob Long, vice-president at age authentication company Daon, emphasized that participants are "doing fraudsters' work for them - giving them a visual representation of who you are." Long even suggested the wording of the trend "sounds like it was intentionally started by a fraudster looking to make the job easy" [2]. The images collected from user content can be retained for an unknown period and potentially used to train AI image generators as part of their datasets.
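To see how much a single photo can give away on its own, the short sketch below lists any GPS coordinates embedded in an image's EXIF metadata before you upload it. This is a minimal illustration using a recent version of the open-source Pillow imaging library, not a tool recommended by the experts quoted here, and the file name is a placeholder:

from PIL import Image, ExifTags

# Placeholder path; any photo taken on a phone is a good test case.
img = Image.open("selfie.jpg")
exif = img.getexif()

# 0x8825 is the standard EXIF pointer tag for the GPS IFD.
gps = exif.get_ifd(0x8825)
if gps:
    for tag_id, value in gps.items():
        # Map numeric GPS tag ids (GPSLatitude, GPSLongitude, ...) to names.
        print(ExifTags.GPSTAGS.get(tag_id, tag_id), value)
else:
    print("No GPS metadata found.")

If the script prints coordinates, the photo pinpoints where it was taken, which is exactly the kind of location clue Moore describes.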
The threat of impersonation and personalized scams looms large. Charlotte Wilson, head of enterprise at Israeli cybersecurity company Check Point, warned that in the wrong hands, a single high-resolution image could be used to create fake social media accounts or realistic deepfakes used to run scams. "Selfies help criminals move from generic scams to personalised, high-conviction impersonation," Wilson explained [2]. A data breach at OpenAI could also put sensitive data, including uploaded images and personal information gathered by the chatbot, in the hands of bad actors ready to exploit it. This reality makes understanding data privacy essential for anyone participating in viral social media trends.

OpenAI's privacy settings state that uploaded images may be used to improve the model, which can include training it. When questioned about these practices, ChatGPT clarified that this does not mean every photo is placed in a public database; instead, the chatbot uses patterns from user content to refine how the system generates images [2]. Still, the accuracy of your caricature reveals how much ChatGPT and OpenAI know about you. When one Mashable writer tried generating a caricature, the results were bland; ChatGPT admitted it "used fun but non-specific caricature tropes" because it lacked deep personal information [1]. More detailed caricatures suggest the platform has accumulated significant data about your habits, interests, and identity.

For users concerned about digital hygiene, OpenAI provides several options to manage your data. ChatGPT saves a history of previous chats, which users can delete to limit the data OpenAI holds about them. To delete an individual conversation, go to the "Your chats" tab in the ChatGPT sidebar, click the three dots next to a chat, and select "Delete." To delete all chats, click your profile icon, navigate to "Settings," then "Data controls," and select "Delete all chats" [1]. Additionally, users can turn off the "Improve the model for everyone" setting, which allows OpenAI to use chats for model training. OpenAI also operates a privacy portal where users can submit data deletion requests and opt out of AI data training by clicking "do not train on my content" [2].
For those still wanting to follow the trend, experts recommend limiting what you share. Wilson advised users to avoid uploading images that reveal any identifying information: "Crop tightly, keep the background plain, and do not include badges, uniforms, work lanyards, location clues or anything that ties you to an employer or a routine." She also cautioned against oversharing personal information in prompts, such as job title, city, or employer [2].
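As a practical way to act on that advice before uploading, the sketch below crops a photo and saves a copy with its EXIF metadata stripped. Again, this is a minimal illustration built on the Pillow library rather than a vetted tool; the paths and crop box are placeholders you would adjust for your own photo:

from PIL import Image

SRC = "selfie.jpg"        # placeholder: the photo you plan to upload
DST = "selfie_clean.jpg"  # placeholder: the sanitized copy to share instead

img = Image.open(SRC)

# Crop tightly to the subject, per Wilson's advice; the box is
# (left, upper, right, lower) in pixels and is only an example.
img = img.crop((200, 100, 800, 700))

# Rebuild the image from raw pixel data so EXIF metadata (camera
# model, timestamps, GPS coordinates) is not carried into the copy.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save(DST)

Rebuilding the image from raw pixels, rather than re-saving the original file, is a belt-and-braces way to make sure no embedded metadata survives in the shared copy.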
Moore recommended reviewing privacy settings before participating. Under EU law, users can request the deletion of personal data collected by the company, though OpenAI notes it may retain some information even after deletion to address fraud, abuse, and security concerns.

Beyond immediate security threats, the trend highlights broader concerns about parasocial relationships with chatbots. People use AI chatbots in various ways, and over time, a chatbot can feel like more than a generic assistant. Some go to ChatGPT with deeply personal medical questions, while others treat it as a relationship advisor, a life coach, or even a close personal friend [1]. If users are developing an emotional reliance on ChatGPT, or starting to believe the chatbot is "alive" and in a relationship with them, it may be time to take a break. The long-term effects of developing an emotional reliance on AI chatbots remain unknown, but experts warn this behavior may be harmful if it takes time and energy away from other relationships, social life, and hobbies. Organizations like Common Sense Media have also warned that AI companions are unsafe for users under 18 [1]. Users can also send additional requests, questions, and comments directly to OpenAI using the email address [email protected].
Source: Euronews
Summarized by Navi