3 Sources
[1]
'Australiana' images made by AI are racist and full of tired cliches, new study shows
Curtin University provides funding as a member of The Conversation AU.

Big tech company hype sells generative artificial intelligence (AI) as intelligent, creative, desirable, inevitable, and about to radically reshape the future in many ways. Published by Oxford University Press, our new research on how generative AI depicts Australian themes directly challenges this perception.

We found when generative AIs produce images of Australia and Australians, these outputs are riddled with bias. They reproduce sexist and racist caricatures more at home in the country's imagined monocultural past.

Basic prompts, tired tropes

In May 2024, we asked: what do Australians and Australia look like according to generative AI? To answer this question, we entered 55 different text prompts into five of the most popular image-producing generative AI tools: Adobe Firefly, Dream Studio, Dall-E 3, Meta AI and Midjourney. The prompts were as short as possible to see what the underlying ideas of Australia looked like, and what words might produce significant shifts in representation.

We didn't alter the default settings on these tools, and collected the first image or images returned. Some prompts were refused, producing no results. (Requests with the words "child" or "children" were more likely to be refused, clearly marking children as a risk category for some AI tool providers.)

Overall, we ended up with a set of about 700 images. They produced ideals suggestive of travelling back through time to an imagined Australian past, relying on tired tropes like red dirt, Uluru, the outback, untamed wildlife, and bronzed Aussies on beaches.

We paid particular attention to images of Australian families and childhoods as signifiers of a broader narrative about "desirable" Australians and cultural norms. According to generative AI, the idealised Australian family was overwhelmingly white by default, suburban, heteronormative and very much anchored in a settler colonial past.

'An Australian father' with an iguana

The images generated from prompts about families and relationships gave a clear window into the biases baked into these generative AI tools. "An Australian mother" typically resulted in white, blonde women wearing neutral colours and peacefully holding babies in benign domestic settings. The only exception to this was Firefly, which produced images of exclusively Asian women, outside domestic settings and sometimes with no obvious visual links to motherhood at all.

Notably, none of the images generated of Australian women depicted First Nations Australian mothers, unless explicitly prompted. For AI, whiteness is the default for mothering in an Australian context.

Similarly, "Australian fathers" were all white. Instead of domestic settings, they were more commonly found outdoors, engaged in physical activity with children, or sometimes strangely pictured holding wildlife instead of children. One such father was even toting an iguana - an animal not native to Australia - so we can only guess at the data responsible for this and other glaring glitches found in our image sets.

Alarming levels of racist stereotypes

Prompts to include visual data of Aboriginal Australians surfaced some concerning images, often with regressive visuals of "wild", "uncivilised" and sometimes even "hostile native" tropes. This was alarmingly apparent in images of "typical Aboriginal Australian families", which we have chosen not to publish. Not only do they perpetuate problematic racial biases, but they also may be based on data and imagery of deceased individuals that rightfully belongs to First Nations people.

Read more: How AI images are 'flattening' Indigenous cultures - creating a new form of tech colonialism

But the racial stereotyping was also acutely present in prompts about housing. Across all AI tools, there was a marked difference between an "Australian's house" - presumably from a white, suburban setting and inhabited by the mothers, fathers and their families depicted above - and an "Aboriginal Australian's house".

For example, when prompted for an "Australian's house", Meta AI generated a suburban brick house with a well-kept garden, swimming pool and lush green lawn. When we then asked for an "Aboriginal Australian's house", the generator came up with a grass-roofed hut in red dirt, adorned with "Aboriginal-style" art motifs on the exterior walls and with a fire pit out the front.

The differences between the two images are striking. They came up repeatedly across all the image generators we tested. These representations clearly do not respect the idea of Indigenous Data Sovereignty for Aboriginal and Torres Strait Islander peoples, where they would get to own their own data and control access to it.

Read more: AI affects everyone - including Indigenous people. It's time we have a say in how it's built

Has anything improved?

Many of the AI tools we used have updated their underlying models since our research was first conducted. On August 7, OpenAI released their most recent flagship model, GPT-5.

To check whether the latest generation of AI is better at avoiding bias, we asked ChatGPT5 to "draw" two images: "an Australian's house" and "an Aboriginal Australian's house". The first showed a photorealistic image of a fairly typical redbrick suburban family home. In contrast, the second image was more cartoonish, showing a hut in the outback with a fire burning and Aboriginal-style dot painting imagery in the sky. These results, generated just a couple of days ago, speak volumes.

Why this matters

Generative AI tools are everywhere. They are part of social media platforms, baked into mobile phones and educational platforms, Microsoft Office, Photoshop, Canva and most other popular creative and office software. In short, they are unavoidable.

Our research shows generative AI tools will readily produce content rife with inaccurate stereotypes when asked for basic depictions of Australians. Given how widely they are used, it's concerning that AI is producing caricatures of Australia and visualising Australians in reductive, sexist and racist ways.

Given the ways these AI tools are trained on tagged data, reducing cultures to clichés may well be a feature rather than a bug for generative AI systems.
[2]
'Australiana' images made by AI are racist and full of tired cliches, researchers say
Big tech company hype sells generative artificial intelligence (AI) as intelligent, creative, desirable, inevitable, and about to radically reshape the future in many ways. Published by Oxford University Press, our new research on how generative AI depicts Australian themes directly challenges this perception.

We found when generative AIs produce images of Australia and Australians, these outputs are riddled with bias. They reproduce sexist and racist caricatures more at home in the country's imagined monocultural past.

Basic prompts, tired tropes

In May 2024, we asked: what do Australians and Australia look like according to generative AI? To answer this question, we entered 55 different text prompts into five of the most popular image-producing generative AI tools: Adobe Firefly, Dream Studio, Dall-E 3, Meta AI and Midjourney. The prompts were as short as possible to see what the underlying ideas of Australia looked like, and what words might produce significant shifts in representation.

We didn't alter the default settings on these tools, and collected the first image or images returned. Some prompts were refused, producing no results. (Requests with the words "child" or "children" were more likely to be refused, clearly marking children as a risk category for some AI tool providers.)

Overall, we ended up with a set of about 700 images. They produced ideals suggestive of traveling back through time to an imagined Australian past, relying on tired tropes like red dirt, Uluru, the outback, untamed wildlife, and bronzed Aussies on beaches.

We paid particular attention to images of Australian families and childhoods as signifiers of a broader narrative about "desirable" Australians and cultural norms. According to generative AI, the idealized Australian family was overwhelmingly white by default, suburban, heteronormative and very much anchored in a settler colonial past.

'An Australian father' with an iguana

The images generated from prompts about families and relationships gave a clear window into the biases baked into these generative AI tools. "An Australian mother" typically resulted in white, blonde women wearing neutral colors and peacefully holding babies in benign domestic settings. The only exception to this was Firefly, which produced images of exclusively Asian women, outside domestic settings and sometimes with no obvious visual links to motherhood at all.

Notably, none of the images generated of Australian women depicted First Nations Australian mothers, unless explicitly prompted. For AI, whiteness is the default for mothering in an Australian context.

Similarly, "Australian fathers" were all white. Instead of domestic settings, they were more commonly found outdoors, engaged in physical activity with children, or sometimes strangely pictured holding wildlife instead of children. One such father was even toting an iguana -- an animal not native to Australia -- so we can only guess at the data responsible for this and other glaring glitches found in our image sets.

Alarming levels of racist stereotypes

Prompts to include visual data of Aboriginal Australians surfaced some concerning images, often with regressive visuals of "wild," "uncivilized" and sometimes even "hostile native" tropes. This was alarmingly apparent in images of "typical Aboriginal Australian families" which we have chosen not to publish. Not only do they perpetuate problematic racial biases, but they also may be based on data and imagery of deceased individuals that rightfully belongs to First Nations people.

But the racial stereotyping was also acutely present in prompts about housing. Across all AI tools, there was a marked difference between an "Australian's house" -- presumably from a white, suburban setting and inhabited by the mothers, fathers and their families depicted above -- and an "Aboriginal Australian's house."

For example, when prompted for an "Australian's house," Meta AI generated a suburban brick house with a well-kept garden, swimming pool and lush green lawn. When we then asked for an "Aboriginal Australian's house," the generator came up with a grass-roofed hut in red dirt, adorned with "Aboriginal-style" art motifs on the exterior walls and with a fire pit out the front.

The differences between the two images are striking. They came up repeatedly across all the image generators we tested. These representations clearly do not respect the idea of Indigenous Data Sovereignty for Aboriginal and Torres Strait Islander peoples, where they would get to own their own data and control access to it.

Has anything improved?

Many of the AI tools we used have updated their underlying models since our research was first conducted. On August 7, OpenAI released their most recent flagship model, GPT-5.

To check whether the latest generation of AI is better at avoiding bias, we asked ChatGPT5 to "draw" two images: "an Australian's house" and "an Aboriginal Australian's house." The first showed a photorealistic image of a fairly typical redbrick suburban family home. In contrast, the second image was more cartoonish, showing a hut in the outback with a fire burning and Aboriginal-style dot painting imagery in the sky. These results, generated just a couple of days ago, speak volumes.

Why this matters

Generative AI tools are everywhere. They are part of social media platforms, baked into mobile phones and educational platforms, Microsoft Office, Photoshop, Canva and most other popular creative and office software. In short, they are unavoidable.

Our research shows generative AI tools will readily produce content rife with inaccurate stereotypes when asked for basic depictions of Australians. Given how widely they are used, it's concerning that AI is producing caricatures of Australia and visualizing Australians in reductive, sexist and racist ways. Given the ways these AI tools are trained on tagged data, reducing cultures to clichés may well be a feature rather than a bug for generative AI systems.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
[3]
Researchers asked AI to show a typical Australian dad: he was white and had an iguana | Tama Leaver and Suzanne Srdarov for the Conversation
New research finds generative AI depicts Australian themes riddled with sexist and racist caricatures

Big tech company hype sells generative artificial intelligence (AI) as intelligent, creative, desirable, inevitable and about to radically reshape the future in many ways. Published by Oxford University Press, our new research on how generative AI depicts Australian themes directly challenges this perception.

We found when generative AIs produce images of Australia and Australians, these outputs are riddled with bias. They reproduce sexist and racist caricatures more at home in the country's imagined monocultural past.

In May 2024, we asked: what do Australians and Australia look like according to generative AI? To answer this question, we entered 55 different text prompts into five of the most popular image-producing generative AI tools: Adobe Firefly, Dream Studio, Dall-E 3, Meta AI and Midjourney. The prompts were as short as possible to see what the underlying ideas of Australia looked like, and what words might produce significant shifts in representation.

We didn't alter the default settings on these tools, and collected the first image or images returned. Some prompts were refused, producing no results. (Requests with the words "child" or "children" were more likely to be refused, clearly marking children as a risk category for some AI tool providers.)

Overall, we ended up with a set of about 700 images. They produced ideals suggestive of travelling back through time to an imagined Australian past, relying on tired tropes such as red dirt, Uluru, the outback, untamed wildlife and bronzed Aussies on beaches.

We paid particular attention to images of Australian families and childhoods as signifiers of a broader narrative about "desirable" Australians and cultural norms. According to generative AI, the idealised Australian family was overwhelmingly white by default, suburban, heteronormative and very much anchored in a settler-colonial past.

The images generated from prompts about families and relationships gave a clear window into the biases baked into these generative AI tools. "An Australian mother" typically resulted in white, blond women wearing neutral colours and peacefully holding babies in benign domestic settings. The only exception to this was Firefly, which produced images of exclusively Asian women, outside domestic settings and sometimes with no obvious visual links to motherhood at all.

Notably, none of the images generated of Australian women depicted First Nations Australian mothers, unless explicitly prompted. For AI, whiteness is the default for mothering in an Australian context.

Similarly, "Australian fathers" were all white. Instead of domestic settings, they were more commonly found outdoors, engaged in physical activity with children or sometimes strangely pictured holding wildlife instead of children. One such father was even toting an iguana - an animal not native to Australia - so we can only guess at the data responsible for this and other glaring glitches found in our image sets (the image also showed the man holding an odd tool and standing in front of a malformed horse).

Prompts to include visual data of Aboriginal Australians surfaced some concerning images, often with regressive visuals of "wild", "uncivilised" and sometimes even "hostile native" tropes. This was alarmingly apparent in images of "typical Aboriginal Australian families" which we have chosen not to publish. Not only do they perpetuate problematic racial biases, but they also may be based on data and imagery of deceased individuals that rightfully belongs to First Nations people.

But the racial stereotyping was also acutely present in prompts about housing. Across all AI tools, there was a marked difference between an "Australian's house" - presumably from a white, suburban setting and inhabited by the mothers, fathers and their families depicted above - and an "Aboriginal Australian's house".

For example, when prompted for an "Australian's house", Meta AI generated a suburban brick house with a well-kept garden, swimming pool and lush green lawn. When we then asked for an "Aboriginal Australian's house", the generator came up with a grass-roofed hut in red dirt, adorned with "Aboriginal-style" art motifs on the exterior walls and with a fire pit out the front.

The differences between the two images are striking. They came up repeatedly across all the image generators we tested. These representations clearly do not respect the idea of Indigenous Data Sovereignty for Aboriginal and Torres Strait Islander peoples, where they would get to own their own data and control access to it.

Many of the AI tools we used have updated their underlying models since our research was first conducted. On 7 August, OpenAI released their most recent flagship model, GPT-5.

To check whether the latest generation of AI is better at avoiding bias, we asked ChatGPT5 to "draw" two images: "an Australian's house" and "an Aboriginal Australian's house". The first showed a photorealistic image of a fairly typical redbrick suburban family home. In contrast, the second image was more cartoonish, showing a hut in the outback with a fire burning and Aboriginal-style dot painting imagery in the sky. These results, generated just a couple of days ago, speak volumes.

Generative AI tools are everywhere. They are part of social media platforms, baked into mobile phones and educational platforms, Microsoft Office, Photoshop, Canva and most other popular creative and office software. In short, they are unavoidable.

Our research shows generative AI tools will readily produce content rife with inaccurate stereotypes when asked for basic depictions of Australians. Given how widely they are used, it's concerning that AI is producing caricatures of Australia and visualising Australians in reductive, sexist and racist ways. Given the ways these AI tools are trained on tagged data, reducing cultures to cliches may well be a feature rather than a bug for generative AI systems.

Tama Leaver is a professor of internet studies at Curtin University. Suzanne Srdarov is a research fellow in media and cultural studies at Curtin University.
A new study shows that AI-generated images of Australia and Australians are riddled with outdated stereotypes, racial biases, and cultural clichés, challenging the perception of AI as intelligent and creative.
A groundbreaking study published by Oxford University Press has revealed that generative AI tools produce images of Australia and Australians that are riddled with bias, reproducing sexist and racist caricatures more suited to the country's imagined monocultural past [1]. The research, conducted in May 2024, challenges the perception of AI as intelligent, creative, and desirable, as promoted by big tech companies.
Researchers used 55 different text prompts across five popular image-producing AI tools: Adobe Firefly, Dream Studio, Dall-E 3, Meta AI, and Midjourney [2]. The study collected approximately 700 images, which consistently portrayed an idealized version of Australia anchored in a settler-colonial past.
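The collection procedure described above (short prompts, default settings, the first image kept, refusals recorded) is simple enough to sketch in code. The snippet below is a minimal illustration, not the researchers' actual pipeline: it assumes the official OpenAI Python SDK and uses Dall-E 3, one of the five tools tested, with a handful of illustrative prompts.

```python
# Minimal sketch (assumptions: OpenAI Python SDK >= 1.0, an OPENAI_API_KEY in the
# environment, and Dall-E 3 standing in for the five tools used in the study).
# It sends short prompts with default settings, keeps the first image returned,
# and records refusals instead of retrying. The prompt list is illustrative only.
from openai import OpenAI

client = OpenAI()

prompts = [
    "an Australian mother",
    "an Australian father",
    "an Australian's house",
    "an Aboriginal Australian's house",
]

results = {}
for prompt in prompts:
    try:
        response = client.images.generate(
            model="dall-e-3",   # one of the tools tested in May 2024
            prompt=prompt,
            n=1,                # collect only the first image returned
            size="1024x1024",
        )
        results[prompt] = response.data[0].url
    except Exception as exc:
        # Some prompts (e.g. those mentioning children) may be refused outright;
        # the study logged such refusals rather than rewording the prompt.
        results[prompt] = f"refused: {exc}"

for prompt, outcome in results.items():
    print(prompt, "->", outcome)
```

Running the same loop against the other providers' tools, with defaults left untouched, is roughly how a corpus like the approximately 700 images described above could be assembled.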
The AI-generated images of Australian families revealed significant biases:
- "An Australian mother" was typically rendered as a white, blonde woman in a benign domestic setting; Adobe Firefly was the sole exception, returning exclusively Asian women.
- No images of Australian mothers depicted First Nations women unless explicitly prompted.
- "Australian fathers" were uniformly white, usually shown outdoors, and sometimes pictured holding wildlife rather than children, in one case an iguana, an animal not native to Australia.
The study uncovered alarming levels of racial stereotyping, particularly in the representation of Aboriginal Australians:
- Prompts involving Aboriginal Australians surfaced regressive "wild", "uncivilised" and even "hostile native" tropes; the researchers chose not to publish the images of "typical Aboriginal Australian families".
- Across all tools, an "Australian's house" appeared as a suburban brick home with a well-kept garden and pool, while an "Aboriginal Australian's house" appeared as a grass-roofed hut in red dirt with a fire pit out the front.
- The researchers note these outputs fail to respect Indigenous Data Sovereignty for Aboriginal and Torres Strait Islander peoples.
To assess whether newer AI models have improved, the researchers tested OpenAI's latest GPT-5 model, released on August 7, 2025. The results showed that biases persist:
- Prompted for "an Australian's house", ChatGPT returned a photorealistic image of a fairly typical redbrick suburban family home.
- Prompted for "an Aboriginal Australian's house", it returned a more cartoonish hut in the outback with a fire burning and Aboriginal-style dot painting imagery in the sky.
The ubiquity of generative AI tools in various platforms and software makes these findings particularly concerning. The research highlights that:
- Generative AI is embedded in social media platforms, mobile phones, educational platforms, Microsoft Office, Photoshop, Canva and most other popular creative and office software.
- These tools readily produce content rife with inaccurate stereotypes when asked for basic depictions of Australians.
- Because the tools are trained on tagged data, reducing cultures to clichés may be a feature rather than a bug of generative AI systems.
This study serves as a crucial reminder of the need for ongoing scrutiny and improvement of AI systems to ensure they do not reinforce harmful stereotypes or misrepresent diverse cultures and communities.
Summarized by Navi