3 Sources
[1]
Australia set to ban 'nudify' apps. How will it work?
The Australian government has announced plans to ban "nudify" tools and hold tech platforms accountable for failing to prevent users from accessing them. This is part of the government's broader strategy to move towards a "digital duty of care" approach to online safety, which places legal responsibility on tech companies to take proactive steps to identify and prevent online harms on their platforms and services. So how will the nudify ban happen in practice? And will it be effective?

How are nudify tools being used?

Nudify or "undress" tools are available on app stores and websites. They use artificial intelligence (AI) to create realistic but fake sexually explicit images of people. Users upload a clothed, everyday photo, which the tool analyses before digitally removing the person's clothing by putting their face onto a nude body (or what the AI predicts the person would look like naked).

The problem is that nudify tools are easy to use and access. The images they create can look highly realistic and can cause significant harm, including bullying, harassment, distress, anxiety, reputational damage and self-harm.

These apps, and other AI tools used to generate image-based abuse material, are a growing problem. In June this year, Australia's eSafety Commissioner revealed that reports of deepfakes and other digitally altered images of people under 18 had more than doubled in the past 18 months. In the first half of 2024, 16 nudify websites named in a lawsuit filed by San Francisco City Attorney David Chiu were visited more than 200 million times. A July 2025 study found that 85 nudify websites had a combined average of 18.5 million visitors over the preceding six months, and that 18 of them, which rely on tech services such as Google's sign-on system or Amazon's and Cloudflare's hosting and content delivery services, made between US$2.6 million and US$18.4 million in that period.

Aren't nudify tools already illegal?

For adults, sharing (or threatening to share) non-consensual deepfake sexualised images is a criminal offence under most Australian federal, state and territory laws. But outside Victoria and New South Wales, it is not currently a criminal offence to create digitally generated intimate images of adults.

For children and adolescents under 18, the situation is slightly different. It is a criminal offence not only to share child sexual abuse material (including fictional, cartoon or fake images generated using AI), but also to create, access, possess and solicit this material.

Developing, hosting and promoting these tools for creating either adult or child content, however, is not currently illegal in Australia. Last month, independent federal MP Kate Chaney introduced a bill that would make it a criminal offence to download, access, supply or offer access to nudify apps and other tools whose sole or dominant purpose is the creation of child sexual abuse material. The government has not taken up this bill; it wants instead to focus on placing the onus on technology companies.

How will the nudify ban actually work?

Minister for Communications Anika Wells said the government will work closely with industry to figure out the best way to proactively restrict access to nudify tools. At this point, it is unclear what the time frames are or how the ban will work in practice.
It might involve the government "geoblocking" access to nudify sites, or directing platforms to remove access to the tools, including advertising links. It might also involve transparency reporting from platforms on what they are doing to address the problem, including risk assessments for illegal and harmful activity.

But government bans and industry collaboration won't completely solve the problem. Users can get around geographic restrictions with VPNs or proxy servers. The tools can also be used "off the radar" via file-sharing platforms, private forums or messaging apps that already host nudify chatbots. Open-source AI models can also be fine-tuned to create new nudify tools.

What are tech companies already doing?

Some tech companies have already taken action against nudify tools. Discord and Apple have removed nudify apps and the developer accounts associated with nudify apps and websites. Meta also bans adult content, including AI-generated nudes. However, Meta came under fire for inadvertently promoting nudify apps through advertisements, even though those ads violate the company's standards. It recently filed a lawsuit against Hong Kong nudify company CrushAI after that company ran more than 87,000 ads across Meta platforms in violation of Meta's rules on non-consensual intimate imagery.

Tech companies can do much more to mitigate harms from nudify and other deepfake tools. For example, they can ensure guardrails are in place for deepfake generators, remove content more quickly, and ban or suspend user accounts. They can restrict search results and block keywords such as "undress" or "nudify", issue "nudges" or warnings to people using related search terms, and use watermarking and provenance indicators to identify the origins of images. They can also work together to share signals of suspicious activity (for example, advertising attempts) and to share digital hashes (a unique code, like a fingerprint) of known image-based abuse or child sexual abuse content with other platforms to prevent recirculation; a minimal code sketch of this hash-matching idea appears at the end of this article.

Education is also key

Placing the onus on tech companies and holding them accountable for reducing the harms from nudify tools is important. But it won't stop the problem on its own. Education must also be a key focus. Young people need comprehensive education on how to critically examine and discuss digital information and content, including digital data privacy, digital rights and respectful digital relationships.

Digital literacy and respectful relationships education shouldn't be built on shame and fear-based messaging, but on affirmative consent: giving young people the skills to recognise and negotiate consent to receive, request and share intimate images, including deepfake images.

We also need effective bystander interventions. This means teaching bystanders how to effectively and safely challenge harmful behaviours and how to support victim-survivors of deepfake abuse. And we need well-resourced online and offline support systems so victim-survivors, perpetrators, bystanders and support persons can get the help they need.

If this article has raised issues for you, call 1800RESPECT on 1800 737 732 or visit the eSafety Commissioner's website for online safety resources. You can also contact Lifeline crisis support on 13 11 14 or text 0477 13 11 14, the Suicide Call Back Service on 1300 659 467, or Kids Helpline on 1800 55 1800 (for young people aged 5-25). If you or someone you know is in immediate danger, call the police on 000.
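The hash-sharing idea mentioned above can be sketched in a few lines of Python. This is an illustration only, not any platform's actual system: industry programs use perceptual hashes such as Microsoft's PhotoDNA or Meta's PDQ, which survive resizing and re-encoding, rather than the exact-match SHA-256 used here, and the hash value, file name and reporting hook below are hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical shared hash list. In practice these lists are distributed
# between platforms through trusted clearing houses, not hard-coded.
KNOWN_ABUSE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_abuse_image(path: Path) -> bool:
    """Check an upload against the shared hash list before it is published."""
    return sha256_of_file(path) in KNOWN_ABUSE_HASHES

# Hypothetical use at upload time:
# upload = Path("upload.jpg")
# if is_known_abuse_image(upload):
#     reject_and_report(upload)  # platform-specific action, not shown here
```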
[2]
Australia to tackle deepfake nudes, online stalking
Australia said Tuesday it will oblige tech giants to prevent online tools being used to create AI-generated nude images or stalk people without detection. The government will work with industry on developing new legislation against the "abhorrent technologies," it said in a statement, without providing a timeline.

"There is no place for apps and technologies that are used solely to abuse, humiliate and harm people, especially our children," Communications Minister Anika Wells said.

"Nudify" apps -- artificial intelligence tools that digitally strip off clothing -- have exploded online, sparking warnings that so-called sextortion scams targeting children are surging. The government will use "every lever" to restrict access to "nudify" and stalking apps, placing the onus on tech companies to block them, Wells said. "While this move won't eliminate the problem of abusive technology in one fell swoop, alongside existing laws and our world-leading online safety reforms, it will make a real difference in protecting Australians," she added.

The proliferation of AI tools has led to new forms of abuse impacting children, including pornography scandals at universities and schools worldwide, where teenagers create sexualized images of their classmates. A recent Save the Children survey found that one in five young people in Spain have been victims of deepfake nudes, with those images shared online without their consent. Any new legislation will aim to ensure that legitimate and consent-based artificial intelligence and online tracking services are not inadvertently impacted, the government said.

'Rushed'

Australia has been at the forefront of global efforts to curb internet harm, especially that targeted at children. The country passed landmark laws in November restricting under-16s from social media -- one of the world's toughest crackdowns on popular sites such as Facebook, Instagram, YouTube and X. Social media giants -- which face fines of up to Aus$49.5 million (US$32 million) if they fail to comply with the teen ban -- have described the laws as "vague," "problematic" and "rushed."

It is unclear how people will verify their ages in order to sign up to social media. The law comes into force by the end of this year. An independent study ordered by the government found this week that age checking can be done "privately, efficiently and effectively." Age assurance is possible through a range of technologies but "no single solution fits all contexts," the study's final report said.
[3]
Deepfake and nudification apps to be 'stopped at the source'
The Albanese government will seek to force tech companies operating in Australia to take responsibility for allowing access to artificial intelligence applications that enable illegal deepfakes and "nudification". Communications Minister Anika Wells announced on Tuesday that the government would work with industry stakeholders and advocates to develop policy and legislation putting the onus on tech companies to remove these apps from circulation.
The Australian government announces plans to ban 'nudify' tools and hold tech platforms accountable for failing to prevent users from accessing them, as part of a broader strategy to combat online harms and protect citizens, especially children.
The Australian government has announced a significant move to combat the rising threat of AI-generated nude images and online stalking. Communications Minister Anika Wells revealed plans to ban "nudify" tools and hold tech platforms accountable for failing to prevent users from accessing them [1].
'Nudify' or "undress" tools are AI-powered applications that create realistic but fake sexually explicit images of people. Users can upload a clothed photo, which the AI then analyses to digitally remove the person's clothing [1]. The ease of access and realistic output of these tools have led to significant concerns about their misuse.
The issue of AI-generated nude images has grown rapidly:
- Reports to Australia's eSafety Commissioner of deepfakes and other digitally altered images of people under 18 more than doubled in the past 18 months [1].
- In the first half of 2024, 16 nudify websites named in a lawsuit filed by San Francisco City Attorney David Chiu were visited more than 200 million times [1].
- A July 2025 study found that 85 nudify websites had a combined average of 18.5 million visitors over the preceding six months, with 18 of them earning between US$2.6 million and US$18.4 million [1].
The legal situation regarding nudify tools is complex:
- For adults, sharing (or threatening to share) non-consensual deepfake sexualised images is a criminal offence under most Australian federal, state and territory laws, but outside Victoria and New South Wales it is not an offence to create such images [1].
- For under-18s, it is already a criminal offence to create, access, possess, solicit or share child sexual abuse material, including AI-generated images [1].
- Developing, hosting and promoting the tools themselves is not currently illegal in Australia [1].
The Australian government's strategy involves:
- Working closely with industry to determine how best to proactively restrict access to nudify tools [1].
- Possible "geoblocking" of nudify sites, or directing platforms to remove access to the tools, including advertising links [1].
- Transparency reporting from platforms on what they are doing to address the problem, including risk assessments for illegal and harmful activity [1].
- New legislation that uses "every lever" to restrict access to nudify and stalking apps, placing the onus on tech companies to block them [2].
Some tech companies have already taken steps:
- Discord and Apple have removed nudify apps and the developer accounts associated with them [1].
- Meta bans adult content, including AI-generated nudes, and has sued Hong Kong nudify company CrushAI, which ran more than 87,000 ads across Meta platforms in violation of Meta's rules on non-consensual intimate imagery [1].
While the government's approach is a step forward, challenges remain:
- Users can get around geographic restrictions with VPNs or proxy servers [1] (the sketch after this list illustrates why).
- The tools can be used "off the radar" via file-sharing platforms, private forums or messaging apps that already host nudify chatbots [1].
- Open-source AI models can be fine-tuned to create new nudify tools [1].
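To make the first of these challenges concrete, here is a minimal sketch of how request-level geoblocking might work and why a VPN defeats it. The lookup_country stub, the blocklist entry and the toy IP prefix rule are all hypothetical; real deployments resolve addresses against a GeoIP database such as MaxMind's, and a VPN simply presents a non-Australian exit address to exactly this kind of check.

```python
# Hypothetical blocklist of services a regulator has directed platforms to block.
BLOCKED_HOSTS = {"example-nudify-service.invalid"}

def lookup_country(client_ip: str) -> str:
    """Hypothetical GeoIP stub mapping a client IP to an ISO country code.
    Real systems query a GeoIP database, not a toy prefix rule like this."""
    return "AU" if client_ip.startswith("1.128.") else "ZZ"

def should_block(client_ip: str, host: str) -> bool:
    """Refuse requests from Australian addresses to blocklisted hosts."""
    return host in BLOCKED_HOSTS and lookup_country(client_ip) == "AU"

print(should_block("1.128.0.7", "example-nudify-service.invalid"))    # True: blocked
print(should_block("203.0.113.9", "example-nudify-service.invalid"))  # False: non-AU exit (e.g. via VPN)
```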
The government aims to ensure that legitimate and consent-based AI and online tracking services are not inadvertently impacted by the new legislation [2]. As Australia continues to lead global efforts in curbing internet harm, the effectiveness of these measures will be closely watched by other nations grappling with similar challenges.

Summarized by Navi