2 Sources
[1]
Distrust in AI is on the rise - but along with healthy scepticism comes the risk of harm
Some video game players recently criticised the cover art on a new video game for being generated with artificial intelligence (AI). Yet the cover art for Little Droid, which also featured in the game's launch trailer on YouTube, was not concocted by AI. It was, the developers claim, carefully designed by a human artist.
Surprised by the attacks on "AI slop", the studio Stamina Zero posted a video showing earlier versions of the artist's handiwork. But while some accepted this evidence, others remained sceptical. In addition, several players felt that even if the Little Droid cover art was human made, it nonetheless resembled AI-generated work. However, some art is deliberately designed to have the futuristic glossy appearance associated with image generators like Midjourney, DALL-E and Stable Diffusion.
It's becoming increasingly easy for images, videos or audio made with AI to be deceptively passed off as authentic or human made. The twist in cases like Little Droid is that what is human or "real" may be incorrectly perceived as machine generated, resulting in misplaced backlash. Such cases highlight the growing problem of balancing trust and distrust in the generative AI era. In this new world, both cynicism and gullibility about what we encounter online are potential problems, and both can lead to harm.
Wrongful accusations
This issue extends well beyond gaming. There are growing criticisms of AI being used to generate and publish music on platforms like Spotify, and some indie artists have consequently been wrongfully accused of releasing AI-generated music, damaging their burgeoning careers. In 2023, an Australian photographer was wrongly disqualified from a photo contest due to the erroneous judgement that her entry was produced by artificial intelligence. Writers, including students submitting essays, can also be falsely accused of sneakily using AI.
Currently available AI detection tools are far from foolproof, and some argue they may never be entirely reliable. Recent discussions have drawn attention to common characteristics of AI writing, including the em dash, which, as authors, we often employ ourselves. Given that text from systems like ChatGPT has characteristic features, writers face a difficult decision: should they continue writing in their own style and risk being accused of using AI, or should they try to write differently?
The delicate balance of trust and distrust
Graphic designers, voice actors and many others are rightly worried about AI replacing them. They are also understandably concerned about tech companies using their labour to train AI models without consent, credit or compensation. There are further ethical concerns that AI-generated images threaten Indigenous inclusion by erasing cultural nuances and challenging Indigenous cultural and intellectual property rights.
At the same time, the cases above illustrate the risks of rejecting authentic human effort and creativity due to a false belief that it is AI. This too can be unfair. People wrongly accused of using AI can suffer emotional, financial and reputational harm.
On the one hand, being fooled that AI content is authentic is a problem. Consider deepfakes, bogus videos and false images of politicians or celebrities. AI content purporting to be real can be linked to scams and dangerous misinformation.
On the other hand, mistakenly distrusting authentic content is also a problem. For example, rejecting the authenticity of a video of war crimes or hate speech by politicians, based on the mistaken or deliberate belief that the content was AI generated, can lead to great harm and injustice. Unfortunately, the growth of dubious content allows unscrupulous individuals to claim that video, audio or images exposing real wrongdoing are fake. As distrust increases, democracy and social cohesion may begin to fray. Given the potential consequences, we must be wary of excessive scepticism about the origin or provenance of online content.
A path forward
AI is a cultural and social technology. It mediates and shapes our relationships with one another, and has potentially transformational effects on how we learn and share information. The fact that AI is challenging our trust relationships with companies, content and each other is not surprising.
And people are not always to blame when they are fooled by AI-manufactured material. Such outputs are increasingly realistic. Furthermore, the responsibility to avoid deception should not fall entirely on internet users and the public. Digital platforms, AI developers, tech companies and producers of AI material should be held accountable through regulation and transparency requirements around AI use.
Even so, internet users will still need to adapt. The need to exercise a balanced and fair sense of scepticism toward online material is becoming more urgent. This means adopting the right level of trust and distrust in digital environments.
The philosopher Aristotle spoke of practical wisdom. Through experience, education and practice, a practically wise person develops skills to judge well in life. Because they tend to avoid poor judgement, including excessive scepticism and naivete, the practically wise person is better able to flourish and do well by others.
We need to hold tech companies and platforms to account for harm and deception caused by AI. We also need to educate ourselves, our communities, and the next generation to judge well and develop some practical wisdom in a world awash with AI content.
[2]
Distrust in AI is on the rise -- but along with healthy skepticism comes the risk of harm
(Tech Xplore's republication of the same article as source [1], identical apart from American spelling.)
As AI-generated content becomes more prevalent, a new challenge emerges: distinguishing between authentic human creations and AI-generated work. This article explores the rising distrust in AI and its implications for creators, consumers, and society at large.
As artificial intelligence (AI) grows more sophisticated, telling authentic human creations apart from AI-generated work has become genuinely difficult. The resulting distrust of AI-generated content sometimes carries unintended consequences for human creators [1][2].
A recent incident involving the video game "Little Droid" highlights the complexity of this issue. The game's cover art, created by a human artist, was criticized by players who mistakenly believed it to be AI-generated. Despite the studio's efforts to prove the art's human origin, some skeptics remained unconvinced [1][2].
This case exemplifies a growing trend where human-created content is incorrectly perceived as AI-generated, leading to misplaced backlash and highlighting the delicate balance between trust and distrust in the AI era.
The problem extends beyond gaming, affecting various creative fields:
- Music: indie artists on platforms like Spotify have been wrongfully accused of releasing AI-generated tracks, damaging their burgeoning careers
- Photography: in 2023, an Australian photographer was wrongly disqualified from a contest after her entry was misjudged as AI-made
- Writing: authors and students are falsely accused of using AI, and detection tools are far from foolproof (a toy example of why they misfire follows below)
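To illustrate how such false accusations arise, here is a minimal sketch of a crude stylometric check. The features, phrases and weights are invented for illustration, and this is not how any real detector works; but it shows how purely style-based signals inevitably flag human writers who happen to share those habits:

```python
# Illustrative sketch of a naive "AI text" heuristic. The features and
# weights below are invented for demonstration; real detectors are more
# sophisticated, but the false-positive failure mode is the same.
import re

EM_DASH = "\u2014"
# Stock phrases often (rightly or wrongly) attributed to chatbots.
SUSPECT_PHRASES = ["delve into", "it is important to note", "in conclusion"]

def ai_likelihood_score(text: str) -> float:
    """Return a 0..1 score computed from surface stylistic signals alone."""
    words = re.findall(r"\w+", text)
    if not words:
        return 0.0
    # Feature 1: em dashes per 100 words.
    dash_rate = text.count(EM_DASH) / len(words) * 100
    # Feature 2: presence of stock phrases.
    phrase_hits = sum(p in text.lower() for p in SUSPECT_PHRASES)
    # Arbitrary weighting of weak, style-based evidence.
    return min(1.0, 0.25 * dash_rate + 0.3 * phrase_hits)

# A human author who simply likes em dashes gets flagged as "AI":
human_text = ("The cover art\u2014drawn by hand\u2014took weeks. "
              "Every frame\u2014every line\u2014was the artist's own.")
print(f"{ai_likelihood_score(human_text):.2f}")  # 1.00, yet fully human-written
```

The point of the sketch is that any detector keyed to stylistic habits will misclassify the many human writers who share them, which is exactly the dilemma the article describes.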
While healthy skepticism towards AI-generated content is necessary, excessive distrust can lead to harmful consequences:
- Emotional, financial and reputational harm to wrongly accused creators
- Genuine evidence of wrongdoing, such as footage of war crimes or hate speech, dismissed as AI fakery
- A gradual fraying of democracy and social cohesion as distrust spreads
As AI continues to shape our relationships and information-sharing practices, finding the right balance of trust and distrust becomes crucial:
- Too much trust leaves people vulnerable to deepfakes, scams and misinformation
- Too much distrust lets unscrupulous actors brand authentic content as fake
To address these challenges, a multi-faceted approach is necessary:
- Hold digital platforms, AI developers and producers of AI material accountable through regulation and transparency requirements
- Educate internet users to exercise balanced skepticism, what Aristotle called practical wisdom, rather than cynicism or gullibility
As AI continues to evolve, society must strike a delicate balance between healthy skepticism and open-mindedness. By developing critical thinking skills and holding AI developers accountable, we can navigate the complex landscape of human and machine-generated content while preserving the value of authentic human creativity.