4 Sources
[1]
Aid groups scolded for fundraising on AI 'poverty porn'
Researchers accuse tech firms of profiting from exploitative AI imagery

The starving child whose picture broke your heart when you saw it on a charity website may not be real. Global health researchers say that stock image companies like Adobe are profiting from AI-generated "poverty porn" that non-profits are using to drum up donations.

In an article published in The Lancet Global Health, Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp, Belgium, reports that, despite years of pushback in the global health community against the exploitative use of images of suffering, generative AI has compounded the problem by making image generation easily accessible and affordable.

Alenichev and co-authors Sonya de Laat, Mark Hann, Patricia Kingori, and Koen Peeters Grietens recently collected more than 100 AI-generated images from social media sites such as LinkedIn and X. Many of these images, they say, "replicate the emotional intensity and visual grammar of poverty porn and dated fundraising imagery."

The term "poverty porn" isn't precisely defined, but generally refers to images or videos that exaggerate poverty or suffering to evoke guilt and drive donations. Concerns about the exploitative use of real imagery go back many years, and have tripped up respected organizations like Doctors Without Borders/Médecins Sans Frontières (MSF), which issued an apology over photo ethics concerns in 2022.

In 2023, academics called out the exploitative use of images of malnourished children for fundraising by non-profits and highlighted the bias embedded in AI images, "despite the AI developers' stated commitment to ensure non-abusive depictions of people, their cultures, and communities." That same year, Amnesty International removed an AI-generated image of a protester in response to criticism. One of the defenses offered for showing a fake protester was that AI-generated people can't be targeted for retaliation.

Meanwhile, AI firms have made voluntary commitments to disallow image-based sexual abuse, but their content policies don't really address "poverty porn," not to mention other problems like disinformation.

Alenichev et al. observe that, for smaller organizations, AI-generated images of suffering children, farmers, or patients, framed with moralizing text, can drive an entire ad campaign. And these groups, they say, appear to believe that, by not showing real people, they're exempt from the ethical concerns that commonly arise when presenting suffering.

"A troubling epitome of this trend might be observed in the fact that globally influential tech companies, such as Adobe, are profiteering from AI-generated images depicting stereotypical and exploitative imagery of Black and Brown bodies experiencing vulnerability and poverty," the authors state.

Asked to comment, Adobe responded by providing background information about how Adobe Stock is a marketplace through which creators can upload and license content. The company allows generative AI content subject to submission standards.

Freepik, a creative content platform based in Málaga, Spain, currently provides a way to filter for AI images, either to exclude them or to see nothing but AI images. Those searching AI-generated images with the keyword "poverty" will see mostly people with black and brown skin, which may or may not be seen as problematic. Jose Florido, chief market development officer at Freepik, explained in an interview that the company provides a platform that connects content makers and content buyers.
"We review everything that is uploaded to the current regulation," Florido explained. "And if it's okay with the current regulation and if there is demand for it, we try to kind of stay away from what is the potential use because, for every type of image, even for very biased images, there are potentially good and totally okay use cases. And there are obviously use cases that can be problematic." Freepik, in other words, polices images as required by law but concerns like "poverty porn" - which isn't clearly defined - are left to those who source their images from the platform. To the extent that lawmakers choose to address this issue, Florido said that the company would welcome clear rules, especially if they're part of an internationally accepted framework. Florido said Freepik has been trying to bring diversity to its images and not just with regard to AI. "There's a lot of bias in development photography," he said. "In the past, for example, if you searched for 'CEO,' you'd only see men. So we balance it to show more diversity." But it's a difficult issue that's never quite done, said Florido, pointing to how Google's efforts to make its AI images more diverse resulted in requests for specific historical figures being implausibly diverse. Fairpicture, a digital content business based in Bern, Switzerland, claims to provide photo and video production services that are "dignified, authentic, and free of stereotypes." The company's site says that it "does not serve cliches" and that all the subjects of its photos are fairly paid for their appearances in the pictures. While generative AI may address the communication and marketing needs of budget-constrained organizations, it may also end up eroding public trust and harming global health, the researchers say. What's more, generative AI appears to be ill-suited for fundraising. In 2022, a separate group of academics looked at the impact of synthetic (AI) content on charitable giving. They found when people are aware an image is fake or generated by AI, it "has a negative impact on donation intentions." Alenichev et al. argue that the rapid development and adoption of generative AI requires accountability and transparency. "The use of AI and the prompts that underlie the images should be disclosed, these prompts especially could curtail the amplification of poverty porn via AI imagery," they wrote. "It is now pivotally important to support local photographers and artists and their attempts to create global health representation beyond the established norms." ®
[2]
Aid Agencies Are Using AI Images Instead of Real Photos
AI-generated images are rapidly replacing traditional photography in portraying extreme poverty, while stock photo websites are increasingly flooded with visuals created using Midjourney, Stable Diffusion, and other image generators. In an article for The Lancet, researcher Arsenii Alenichev calls the phenomenon "poverty porn 2.0."

"It is quite clear that various organizations are starting to consider synthetic images instead of real photography, because it's cheap and you don't need to bother with consent and everything," says Alenichev.

A search for "poverty" on Adobe Stock and Freepik, two popular stock photo websites, turns up scores of AI-generated images. Freepik CEO Joaquín Abela tells the Guardian that consumers are driving demand for AI images, claiming it is not the platform's responsibility. "It's like trying to dry the ocean. We make an effort, but in reality, if customers worldwide want images a certain way, there is absolutely nothing that anyone can do," Abela says.

Even big players like the United Nations have turned to AI: a video depicting "re-enactments" of sexual violence in conflict was taken down after the Guardian contacted the organization. However, a video from Plan International that features AI imagery remains online.

For years, the debate around "poverty porn" and photography has raged, with arguments that subjects are reduced to mere props; think of the staged photograph by Edwin Ong Wee Kee. "It saddens me that the fight for more ethical representation of people experiencing poverty now extends to the unreal," Kate Kardol, an NGO communications consultant, tells the Guardian.

Alenichev, who works at the Institute of Tropical Medicine in Antwerp, Belgium, says that the AI images "replicate the visual grammar of poverty." He has collected hundreds of AI images used by aid agencies that are not only unreal but also perpetuate stereotypes.

Adobe Stock, which didn't comment on the Guardian's story, earlier this year denied reports that nearly 50 percent of the website's portfolio was AI-generated. Photographer Robert Kneschke's research found that 300 million AI images were uploaded in just three years; it took photographers 20 years to upload that many real photos to Adobe Stock. While Adobe disputes those numbers, what is clear is just how easy it is to make AI images that look like real photos.
[3]
AI-generated 'poverty porn' fake images being used by aid agencies
Exclusive: Pictures depicting the most vulnerable and poorest people are being used in social media campaigns in the sector, driven by concerns over consent and cost

AI-generated images of extreme poverty, children and sexual violence survivors are flooding stock photo sites and increasingly being used by leading health NGOs, according to global health professionals who have voiced concern over a new era of "poverty porn".

"All over the place, people are using it," said Noah Arnold, who works at Fairpicture, a Swiss-based organisation focused on promoting ethical imagery in global development. "Some are actively using AI imagery, and others, we know that they're experimenting at least."

Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp studying the production of global health images, said: "The images replicate the visual grammar of poverty - children with empty plates, cracked earth, stereotypical visuals."

Alenichev has collected more than 100 AI-generated images of extreme poverty used by individuals or NGOs as part of social media campaigns against hunger or sexual violence. Images he shared with the Guardian show exaggerated, stereotype-perpetuating scenes: children huddled together in muddy water; an African girl in a wedding dress with a tear staining her cheek. In a comment piece published on Thursday in the Lancet Global Health, he argues these images amount to "poverty porn 2.0".

While it is hard to quantify the prevalence of the AI-generated images, Alenichev and others say their use is on the rise, driven by concerns over consent and cost. Arnold said that US funding cuts to NGO budgets had made matters worse. "It is quite clear that various organisations are starting to consider synthetic images instead of real photography, because it's cheap and you don't need to bother with consent and everything," said Alenichev.

AI-generated images of extreme poverty now appear in their dozens on popular stock photo sites, including Adobe Stock Photos and Freepik, in response to queries such as "poverty". Many bear captions such as "Photorealistic kid in refugee camp"; "Asian children swim in a river full of waste"; and "Caucasian white volunteer provides medical consultation to young black children in African village". Adobe sells licences to the last two photos in that list for about £60.

"They are so racialised. They should never even let those be published because it's like the worst stereotypes about Africa, or India, or you name it," said Alenichev.

Joaquín Abela, CEO of Freepik, said the responsibility for using such extreme images lay with media consumers, not with platforms such as his. The AI stock photos, he said, are generated by the platform's global community of users, who can receive a licensing fee when Freepik's customers choose to buy their images.

Freepik had attempted to curb biases it had found in other parts of its photo library, he said, by "injecting diversity" and trying to ensure gender balance in the photos of lawyers and CEOs hosted on the site. But, he said, there was only so much his platform could do. "It's like trying to dry the ocean. We make an effort, but in reality, if customers worldwide want images a certain way, there is absolutely nothing that anyone can do."

In the past, leading charities have used AI-generated images as part of their communications strategies on global health.
In 2023, the Dutch arm of UK charity Plan International released a video campaign against child marriage containing AI-generated images of a girl with a black eye, an older man and a pregnant teenager.

Last year, the UN posted a video on YouTube with AI-generated "re-enactments" of sexual violence in conflict, which included AI-generated testimony from a Burundian woman describing being raped by three men and left to die in 1993 during the country's civil war. The video was removed after the Guardian contacted the UN for comment.

A UN Peacekeeping spokesperson said: "The video in question, which was produced over a year ago using a fast-evolving tool, has been taken down, as we believed it shows improper use of AI, and may pose risks regarding information integrity, blending real footage and near-real artificially generated content.

"The United Nations remains steadfast in its commitment to support victims of conflict-related sexual violence, including through innovation and creative advocacy."

Arnold said the rising use of these AI images comes after years of debate in the sector around ethical imagery and dignified storytelling about poverty and violence. "Supposedly, it's easier to take ready-made AI visuals that come without consent, because it's not real people."

Kate Kardol, an NGO communications consultant, said the images frightened her, and recalled earlier debates about the use of "poverty porn" in the sector. "It saddens me that the fight for more ethical representation of people experiencing poverty now extends to the unreal," she said.

Generative AI tools have long been found to replicate - and at times exaggerate - broader societal biases. The proliferation of biased images in global health communications may make the problem worse, said Alenichev, because the images could filter out into the wider internet and be used to train the next generation of AI models, a process which has been shown to amplify prejudice.

A spokesperson for Plan International said the NGO had, as of this year, "adopted guidance advising against using AI to depict individual children", and said the 2023 campaign had used AI-generated imagery to safeguard "the privacy and dignity of real girls".
[4]
Charities Using AI-Generated Photos of Starving Children to Raise Money
The scenes are grisly: stick-thin children huddling together in a muddy stream, white volunteers surrounded by throngs of starving Africans, and Arab children in refugee camps holding tin bowls. The only problem? None of them are real.

In a macabre phenomenon sweeping the world's leading non-government organizations, The Guardian reports, charity groups are now weaponizing AI to produce heavily racialized misery-slop, replete with nonexistent imagery of poverty, violence, and climate disasters.

"The images replicate the visual grammar of poverty -- children with empty plates, cracked earth, stereotypical visuals," Arsenii Alenichev, a researcher at the Institute of Tropical Medicine, told the newspaper. Alenichev was the lead author on a commentary article recently published in the journal The Lancet covering the issue of charities and AI-generated suffering, something he calls "poverty porn 2.0."

The researcher distinguishes the AI phenomenon from earlier ideas of poverty porn, a term coined in 2007 to describe the kind of voyeuristic imagery taken of poor or oppressed people in order to shock viewers in rich, developed countries. Then, the goal was to goad viewers into donating after shocking their senses, playing into the fantasy that their charity would solve the problem. In poverty porn 2.0, the subjects of the images have become the fantasy, avoiding even the financial and ethical costs of capturing real suffering.

"It is quite clear that various organizations are starting to consider synthetic images instead of real photography, because it's cheap and you don't need to bother with consent and everything," Alenichev told The Guardian.

Altogether, the researcher says he's collected over 100 synthetic images being used by charities in their campaigns to raise money. They come from groups like the UK's Plan International, which posted AI-generated images as part of an anti-child-marriage campaign, and even the United Nations, which The Guardian says generated "re-enactments" of sexual violence.

It's a particularly disgusting move given that there's no shortage of poor and immiserated people in the real world. Doubly ironic is the fact that the AI behind this new type of poverty porn is fueled by the same wealth inequality and pollution that many charities are ostensibly working to ameliorate. If NGO officials are really interested in ending global suffering, their best bet would be to stop fantasizing about poverty and start asking why people remain poor in the first place.
Aid agencies and charities are increasingly using AI-generated images of extreme poverty for fundraising, sparking ethical concerns and debates about representation and consent in the digital age.
In a concerning trend, aid agencies and charities are increasingly turning to AI-generated images of extreme poverty for their fundraising campaigns. This shift, dubbed "poverty porn 2.0" by researchers, has sparked a heated debate about ethics, representation, and consent in the digital age [1].

Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp, Belgium, has collected over 100 AI-generated images used by individuals or NGOs in social media campaigns against hunger or sexual violence. These images often depict exaggerated, stereotype-perpetuating scenes, such as children huddled in muddy water or African girls in wedding dresses with tears staining their cheeks [3].

The use of AI-generated imagery in charity campaigns raises several ethical concerns. Critics argue that these images perpetuate harmful stereotypes and reduce complex social issues to simplistic, emotionally manipulative visuals. Moreover, the ease of creating AI images has led some organizations to bypass the ethical considerations typically associated with photographing real people in vulnerable situations [2].

Noah Arnold of Fairpicture, a Swiss-based organization promoting ethical imagery in global development, notes, "Some are actively using AI imagery, and others, we know that they're experimenting at least" [3].

Popular stock photo websites like Adobe Stock and Freepik have become repositories for these AI-generated images of poverty. A search for "poverty" on these platforms yields numerous AI-created visuals, often with captions like "Photorealistic kid in refugee camp" or "Asian children swim in a river full of waste" [3].

Joaquín Abela, CEO of Freepik, argues that the responsibility for using such extreme images lies with media consumers, not the platforms. However, this stance has been criticized by those who believe platforms should take a more active role in curating content [3].
Several high-profile organizations have faced backlash for their use of AI-generated imagery: the Dutch arm of Plan International released a 2023 video campaign against child marriage containing AI-generated images [3], and the United Nations removed a YouTube video featuring AI-generated "re-enactments" of sexual violence in conflict after being contacted by the Guardian [3][1].
As the debate continues, many in the sector are calling for clearer guidelines and ethical standards for the use of AI-generated imagery in charity campaigns. Kate Kardol, an NGO communications consultant, expresses concern: "It saddens me that the fight for more ethical representation of people experiencing poverty now extends to the unreal" [3].
The challenge for the humanitarian sector moving forward will be to balance the need for impactful storytelling with ethical considerations, ensuring that campaigns do not perpetuate harmful stereotypes or exploit the very people they aim to help.
Summarized by Navi