Curated by THEOUTPOST
On Thu, 8 Aug, 4:02 PM UTC
3 Sources
[1]
Australian science magazine slammed over AI-generated articles
SYDNEY (AFP) - One of Australia's leading science magazines drew fire on Thursday after publishing AI-generated articles that experts said were incorrect or oversimplified.

Cosmos, published by Australia's state-backed national science agency, used OpenAI's GPT-4 to produce six articles published last month.

Although the use of artificial intelligence was disclosed, the Science Journalists Association of Australia said it had caused serious concerns.

Association president Jackson Ryan told AFP that in the AI-generated Cosmos article "What happens to our bodies after death?", the descriptions of scientific processes were incorrect or vastly simplified.

In one example, the AI service wrote that rigor mortis sets in three to four hours after death. Ryan said scientific research shows the timing to be less definitive.

Another example concerned autolysis, a process in which cells are destroyed by their own enzymes, which the article described as "self-breaking". Ryan said this was a poor description of the process, and that such inaccuracies would generally damage people's trust in and perception of the publication.

A spokesperson for the national science agency said the AI content had been fact-checked by a "trained science communicator and edited by the Cosmos publishing team". Cosmos will continue to review the use of the AI service throughout the experiment, the spokesperson said.

The magazine has drawn further criticism for using a journalism grant to develop its artificial intelligence capabilities, which critics say could come at the expense of journalists.

Former Cosmos editor Gail MacCallum told Australia's national broadcaster ABC that while she was a "huge proponent of exploring AI", having it create articles was "past my comfort zone". Another former editor, Ian Connellan, told the ABC that he had not been informed of the AI project and that, had he been, he would have advised it was a "bad idea".

The use of AI is becoming a major battleground for publishers and musicians.
The New York Times recently sued ChatGPT-maker OpenAI and Microsoft in a US court, alleging that the companies' powerful AI models used millions of articles for training without permission. The emerging AI giants are facing a wave of lawsuits over using internet content to build systems that create content on simple prompts.
Cosmos, a leading Australian science magazine, has come under fire for publishing AI-generated articles that experts said contained errors and oversimplifications. The episode has sparked debate about the use of AI in journalism and the editorial safeguards it requires.
Cosmos, a prominent Australian science magazine published by the country's state-backed national science agency, has found itself at the center of a heated debate after using OpenAI's GPT-4 to produce six articles last month. Although the use of artificial intelligence was disclosed, the Science Journalists Association of Australia said the experiment raised serious concerns, prompting wider questions about the ethics of AI use in journalism and the standards it demands.
Association president Jackson Ryan singled out the AI-generated article "What happens to our bodies after death?", in which descriptions of scientific processes were incorrect or vastly simplified. The article stated that rigor mortis sets in three to four hours after death, a timeline Ryan said research shows to be less definitive, and described autolysis, a process in which cells are destroyed by their own enzymes, as "self-breaking", which he called a poor description of the process.
In response to the criticism, a spokesperson for the national science agency said the AI content had been fact-checked by a "trained science communicator and edited by the Cosmos publishing team", and that the magazine would continue to review the use of the AI service throughout the experiment.
The incident has sparked a broader discussion about the role of AI in science journalism. Critics argue that using AI to generate scientific content without rigorous oversight risks spreading misinformation, and that AI-generated articles may lack the nuanced understanding and critical analysis that human experts bring to science reporting.
Ryan warned that such inaccuracies would damage readers' trust in and perception of the publication. The magazine has also been criticized for using a journalism grant to develop its AI capabilities, which could come at the expense of journalists. Former editor Gail MacCallum said that while she was a "huge proponent of exploring AI", having it create articles was "past my comfort zone", and another former editor, Ian Connellan, said he had not been told of the project and would have advised it was a "bad idea".
While the controversy has highlighted the pitfalls of publishing AI-generated content without rigorous oversight, it has also opened up discussion of AI's potential benefits and drawbacks in journalism. Some argue that AI could be a valuable tool for assisting journalists, but stress the importance of human review and transparent practices.
For its part, Cosmos says it will continue to review the use of the AI service as the experiment proceeds.
Quartz, owned by G/O Media, has been publishing AI-generated news articles, sparking debates about accuracy, sourcing, and the future of journalism.
2 Sources
A reporter for the Cody Enterprise in Wyoming has resigned after being caught using artificial intelligence to create fake quotes and stories. The incident has raised concerns about the use of AI in journalism and its potential to undermine trust in news reporting.
9 Sources
A recent survey reveals widespread apprehension among Australians regarding artificial intelligence. The study emphasizes the crucial role of media literacy in addressing these concerns and navigating the evolving AI landscape.
5 Sources
A BBC investigation finds that major AI chatbots, including ChatGPT, Copilot, Gemini, and Perplexity AI, struggle with accuracy when summarizing news articles, raising concerns about the reliability of AI in news dissemination.
14 Sources
Il Foglio, an Italian newspaper, has published the world's first AI-generated edition, raising questions about the role of artificial intelligence in journalism and its potential impact on the industry.
7 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved