Curated by THEOUTPOST
On Thu, 5 Sept, 12:02 AM UTC
7 Sources
[1]
Australia plans AI rules on human oversight, transparency
Australia's centre-left government plans to introduce targeted AI rules requiring human intervention and transparency, amid a rapid rollout of AI tools by businesses and in everyday life. Industry and Science Minister Ed Husic announced 10 new voluntary guidelines and opened a month-long consultation on whether to make them mandatory in high-risk settings, emphasizing the need for protections against unintended consequences. Australia currently has no AI-specific laws, and a government report published this year found the existing settings inadequate for high-risk scenarios.
[2]
Australia Plans AI Rules on Human Oversight, Transparency
SYDNEY (Reuters) - Australia's centre-left government said on Thursday it planned to introduce targeted artificial intelligence rules, including human intervention and transparency, amid a rapid rollout of AI tools by businesses and in everyday life.

Industry and Science Minister Ed Husic unveiled 10 new voluntary guidelines on AI systems and said the government had opened a month-long consultation on whether to make them mandatory in high-risk settings. "Australians know AI can do great things but people want to know there are protections in place if things go off the rails," Husic said in a statement. "Australians want stronger protections on AI, we've heard that, we've listened."

The report containing the guidelines said it was critical to enable human control as required across an AI system's lifecycle. "Meaningful human oversight will let you intervene if you need to and reduce the potential for unintended consequences and harms," the report said. Companies must also be transparent and disclose AI's role in generating content, it added.

Regulators around the world have raised concerns about misinformation and fake news spread by AI tools amid the rising popularity of generative AI systems such as Microsoft-backed OpenAI's ChatGPT and Google's Gemini. In May, the European Union passed landmark AI laws imposing strict transparency obligations on high-risk AI systems, obligations more comprehensive than the light-touch voluntary approach taken in several other countries. "We don't think that there is a right to self-regulation any more. I think we've passed that threshold," Husic told ABC News.

Australia has no specific laws regulating AI, though in 2019 it introduced eight voluntary principles for its responsible use. A government report published this year said the current settings were not adequate to tackle high-risk scenarios. Husic said only one-third of businesses using AI were implementing it responsibly on metrics such as safety, fairness, accountability and transparency. "Artificial intelligence is expected to create up to 200,000 jobs in Australia by 2030 ... so it's crucial that Australian businesses are equipped to properly develop and use the technology," he said.

(Reporting by Renju Jose in Sydney; Editing by Jamie Freed)
[3]
Australia plans AI rules on human oversight, transparency
[4]
Australia plans to introduce AI rules on human oversight, transparency
Industry and Science Minister Ed Husic unveiled 10 new voluntary guidelines on AI systems and said the government has opened a month-long consultation over whether to make them mandatory in the future in high-risk settings. "Australians know AI can do great things but people want to know there are protections in place if things go off the rails," Husic said in a statement. "Australians want stronger protections on AI, we've heard that, we've listened." The report containing the guidelines said it was critical to enable human control as required across an AI system's lifecycle.
[5]
Australia to Propose Mandatory Guardrails for AI Development
Australia's government is releasing potential mandatory guardrails for high-risk research and development of artificial intelligence, including establishing "meaningful" human oversight and ensuring any AI-created content is clearly labeled. Industry and Science Minister Ed Husic will unveil 10 potential mandatory guidelines for consultation on Thursday in Canberra, while introducing a voluntary safety standard effective immediately. A report by Australia's Tech Council estimated that generative AI could be worth as much as A$115 billion ($77.2 billion) annually to the nation's economy by 2030.
[6]
Labor considers an Artificial Intelligence Act to impose 'mandatory guardrails' on use of AI
'We need more people to use AI and to do that we need to build trust', says industry and science minister Ed Husic

The Australian government is considering a European Union-style Artificial Intelligence Act to regulate minimum standards for high-risk AI across the whole economy. On Wednesday the industry and science minister, Ed Husic, released a discussion paper proposing 10 "mandatory guardrails" for high-risk AI, including human oversight and the ability to challenge the use of AI or the outcomes of automated decision-making.

In a statement, Husic said Australians "know AI can do great things" but they also wanted to know protections are in place "if things go off the rails". "We need more people to use AI and to do that we need to build trust" by creating a regulatory framework, he said.

Long the stuff of science fiction, generative artificial intelligence - which can create new content such as text, images, audio and code - has seen a surge in uptake through "large language model" programs such as ChatGPT, Google's chatbot Bard and Microsoft Bing's chat feature. AI is already posing challenges for lawmakers and consumers, from misleading people on prices and alleged defamation through false claims served up by chat programs, to warnings from the eSafety commissioner that generative AI could automate child grooming by predators.

The paper notes that while AI can "improve wellbeing and quality of life", its potential harms include creating and amplifying bias; physical or psychological injury; breaches of privacy; and threats to national security through information manipulation and malicious cyber activity. It proposes to define high-risk AI by reference to "intended and foreseeable uses" and to the "unforeseeable risks" created by general-purpose AI, which "can be applied in contexts they were not originally designed for".
Under the 10 guardrails, organisations developing or deploying high-risk AI would need to establish risk management processes; test AI and monitor it once deployed; enable human control or intervention to achieve meaningful human oversight; inform end-users of AI decisions, interactions and content; and establish processes for people affected by AI systems to challenge their use or outcomes. Organisations should also be required to "keep and maintain records to allow third parties to assess compliance with guardrails" and to undertake tests certifying their own compliance, it said.

Once settled, the guardrails could be applied within existing regulations, through amendments to legislation in separate domains, or through "a new cross-economy AI Act", as the EU moved to do in June. That option would enable the creation of an independent AI regulator, although this would "take resources and time", the paper said.

Husic said Australians want stronger protections on AI, and businesses had also been calling for greater clarity around using AI safely. "From today, we're starting to put those protections in place," he said.

The paper noted the government plans to "strengthen privacy protections, transparency and accountability", a reference to the privacy law reform package expected to be unveiled as early as next week. The attorney general's department is also working on a "whole of government legal framework to support use of automated decision-making systems", a recommendation from the robodebt royal commission.
[7]
Australia plans AI rules on human oversight, transparency
Australia unveils plans for new artificial intelligence regulations, emphasizing human oversight and transparency. The proposed rules aim to address potential risks and ensure responsible AI development.
In a significant move towards regulating artificial intelligence (AI), Australia has announced plans to introduce new rules focusing on human oversight and transparency in AI systems. The proposed regulations, unveiled by Industry Minister Ed Husic, aim to address the potential risks associated with AI while fostering innovation in the rapidly evolving field [1].
The Australian government's approach to AI regulation centers on several key areas:
Human Oversight: The rules will require human oversight of AI systems, ensuring that critical decisions are not left solely to machines [2].
Transparency: AI developers and users will be required to provide clear information about how their systems work, promoting accountability and trust [3].
Fairness and Non-Discrimination: The regulations will address concerns about bias in AI systems, ensuring fair treatment across diverse populations [4].
Privacy Protection: Safeguards will be put in place to protect individuals' privacy when their data is used in AI applications [5].
Minister Husic emphasized the importance of industry consultation in developing these regulations. The government plans to work closely with AI developers, businesses, and other stakeholders to ensure the rules are practical and effective. This collaborative approach aims to strike a balance between innovation and responsible AI development [1].
Australia's move aligns with similar efforts by other countries and regions, such as the European Union's AI Act. The proposed regulations reflect a growing global consensus on the need for guardrails in AI development and deployment. By taking this step, Australia aims to position itself as a leader in responsible AI innovation while addressing potential risks [5].
The Australian government has released a discussion paper outlining the proposed regulations and opened a month-long public consultation, allowing stakeholders to provide input and feedback. Whether, and on what timeline, the guardrails will become mandatory is yet to be announced [2].
While some industry players may view new regulations as potential obstacles, the Australian government argues that clear rules will actually encourage innovation by providing a stable and trustworthy environment for AI development. Minister Husic stated that the aim is to ensure AI systems are "safe, responsible and reliable," fostering public trust in the technology [3].
© 2025 TheOutpost.AI All rights reserved