Curated by THEOUTPOST
On Mon, 19 Aug, 12:00 AM UTC
5 Sources
[1]
Most Australians are worried about artificial intelligence, new survey shows. Improved media literacy is vital
Queensland University of Technology, Western Sydney University, and University of Canberra provide funding as members of The Conversation AU.

After becoming mainstream in 2023, generative artificial intelligence (AI) is now transforming the way we live. This technology is a type of AI that can generate text, images and other content in response to prompts. In particular, it has transformed the way we consume and create information and media. For example, millions of people now use the technology to summarise lengthy documents, draft emails and increase their productivity at work. Newsrooms have also started experimenting with generative AI, and film companies are using it to create digital doubles of actors and even "digital clones" of actors who have died.

These transformations are bound to increase in the coming months and years. So too are the many concerns and controversies surrounding the use of generative AI. In the face of these complex and rapid developments, we surveyed more than 4,000 Australians to better understand their experiences with and attitudes toward generative AI. Released today, our results paint a complicated picture, and underscore the vital importance of improved media literacy programs.

Who is using generative AI in Australia?

Between January and April this year we surveyed a representative sample of 4,442 adult Australians. We asked people a range of questions about their media use, attitudes and abilities, including a series of questions about generative AI.

Just under four in ten (39%) adults have experience using text-based generative AI services such as ChatGPT or Bard. Of this group, 13% use these services regularly and 26% have tried them. An additional three in ten (29%) adults know of these services but have not used them, while 26% are not at all familiar with them.

Far fewer Australians are using image-focused generative AI services such as Midjourney or DALL-E. These services can be used to create illustrations or artworks, adjust or alter photographs, or design posters. Only 3% use these services regularly and 13% have tried them. Half (50%) of adults are not at all familiar with image-based AI services, while 28% have heard of them but have not used them.

Some groups are much more likely to be using generative AI. Regular use is strongly correlated with age: younger adults are much more likely to use generative AI regularly than older adults. Adults with a high level of education are also much more likely to be using this technology, as are people with a high household income.

Australians are worried about generative AI

Many Australians believe generative AI could make their lives better. But more Australians agree generative AI will harm Australian society (40%) than disagree (16%). This is perhaps why almost three quarters (74%) of adult Australians believe laws and regulations are needed to manage the risks associated with generative AI.

Just one in five (22%) adults are confident about using generative AI tools, although 46% say they want to learn more. Significantly, many people said they don't know how they feel about generative AI. This indicates many Australians don't yet know enough about the technology to make informed decisions about its use.

The role for media literacy

Our survey shows the more confident people are about their media abilities, the more likely they are to be aware of generative AI and confident using it.
Adult media literacy programs and resources can be used to increase people's media knowledge and ability. These programs can be created and delivered online and in person by public broadcasters and other media organisations, universities, community organisations, libraries and museums.

Media literacy is widely recognised as essential for full participation in society. A media literate person is able to create, use and share a diverse range of media while critically analysing their media engagement. Our research shows there is a need for new media literacy resources to ensure Australians are able to make informed decisions about generative AI. For example, this kind of education is crucial for adults to develop their digital know-how so they can determine whether images are real and can be trusted.

In addition, media literacy can show people how to apply critical thinking to generative AI. For example, if a person uses an AI tool to generate images, they should ask themselves:

1) why has the AI tool created the image in this way, and does it reproduce social stereotypes or biases?
2) could I use a different prompt to encourage the AI to create a more accurate or fairer representation?
3) what would happen if I experimented with different AI tools to create the image?
4) how can I use the advanced features within an AI tool to refine my image and produce a more satisfactory result?
5) what kind of data has the AI been "trained on" to produce this kind of image?

Without interventions, emerging technologies such as generative AI will widen existing gaps between those with low and high levels of confidence in their media ability. It's therefore urgent for Australian governments to provide appropriate funding for media literacy resources and programs. This will help ensure all citizens can respond to the ever-changing digital media landscape and fully participate in contemporary society.
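To make the "digital know-how" point above concrete: one weak, first-pass signal a reader can check is an image file's embedded metadata. The sketch below is illustrative only, assuming the Python Pillow library and a hypothetical file name; metadata is easily stripped or forged, so its presence or absence proves nothing on its own, which is exactly why the critical-thinking habits described above still matter.

```python
# Minimal sketch: inspecting image metadata as one (unreliable) provenance
# signal. Assumes Pillow is installed (`pip install Pillow`); "example.jpg"
# is a hypothetical file name.
from PIL import Image
from PIL.ExifTags import TAGS

def describe_metadata(path: str) -> dict:
    """Return human-readable EXIF tags for an image file."""
    img = Image.open(path)
    exif = img.getexif()
    # Map numeric EXIF tag IDs to their human-readable names.
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    for tag, value in describe_metadata("example.jpg").items():
        print(f"{tag}: {value}")
```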
[2]
Most Australians worried about AI, new survey shows, media literacy vital
[3]
Most Australians are worried about artificial intelligence, new survey shows. Improved media literacy is vital
[4]
Generative AI hype is ending - and now the technology might actually become useful
University of South Australia provides funding as a member of The Conversation AU.

Less than two years ago, the launch of ChatGPT started a generative AI frenzy. Some said the technology would trigger a fourth industrial revolution, completely reshaping the world as we know it. In March 2023, Goldman Sachs predicted 300 million jobs would be lost or degraded due to AI. A huge shift seemed to be underway.

Eighteen months later, generative AI is not transforming business. Many projects using the technology are being cancelled, such as an attempt by McDonald's to automate drive-through ordering, which went viral on TikTok after producing comical failures. Government efforts to build systems that summarise public submissions and calculate welfare entitlements have met the same fate. So what happened?

The AI hype cycle

Like many new technologies, generative AI has been following a path known as the hype cycle, first described by American tech research firm Gartner. This widely used model describes a recurring process in which the initial success of a technology leads to inflated public expectations that eventually fail to be realised. After the early "peak of inflated expectations" comes a "trough of disillusionment", followed by a "slope of enlightenment" which eventually reaches a "plateau of productivity".

A Gartner report published in June listed most generative AI technologies as either at the peak of inflated expectations or still heading toward it. The report argued most of these technologies are two to five years away from becoming fully productive.

Many compelling prototypes of generative AI products have been developed, but adopting them in practice has been less successful. A study published last week by American think tank RAND showed 80% of AI projects fail, more than double the rate for non-AI projects.

Shortcomings of current generative AI technology

The RAND report lists many difficulties with generative AI, ranging from high investment requirements in data and AI infrastructure to a lack of needed human talent. However, the unusual nature of generative AI's limitations represents a critical challenge. For example, generative AI systems can solve some highly complex university admission tests yet fail very simple tasks. This makes it very hard to judge the potential of these technologies, which leads to false confidence. After all, if it can solve complex differential equations or write an essay, it should be able to take simple drive-through orders, right?

A recent study showed that the abilities of large language models such as GPT-4 do not always match what people expect of them. In particular, more capable models severely underperformed in high-stakes cases where incorrect responses could be catastrophic. These results suggest the models can induce false confidence in their users. Because they answer questions fluently, humans can reach overoptimistic conclusions about their capabilities and deploy the models in situations they are not suited for.

Experience from successful projects shows it is tough to make a generative model follow instructions. For example, Khan Academy's Khanmigo tutoring system often revealed the correct answers to questions despite being instructed not to.

So why isn't the generative AI hype over yet?

There are a few reasons for this. First, generative AI technology, despite its challenges, is rapidly improving, with scale and size being the primary drivers of the improvement.
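One commonly cited way to formalise this scale-driven improvement is an empirical scaling law of the kind reported by Hoffmann et al. (the "Chinchilla" study). The functional form and approximate exponents below come from that literature, not from the articles summarised here, and are illustrative rather than definitive.

```latex
% Empirical scaling law for language-model loss (Hoffmann et al., 2022):
% L is pretraining loss, N is parameter count, D is training tokens;
% E, A, B, \alpha and \beta are constants fitted to experiments.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% The published fits give roughly \alpha \approx 0.34 and \beta \approx 0.28;
% loss falls smoothly as either parameters or training data grow, which is
% the quantitative sense in which "scale and size" drive improvement.
```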
Research shows that the size of language models (number of parameters), as well as the amount of data and computing power used for training, all contribute to improved model performance. In contrast, the architecture of the neural network powering the model seems to have minimal impact.

Large language models also display so-called emergent abilities: unexpected capabilities on tasks for which they haven't been trained. Researchers have reported new capabilities "emerging" when models reach a specific critical "breakthrough" size. Studies have found sufficiently complex large language models can develop the ability to reason by analogy and even reproduce optical illusions like those experienced by humans. The precise causes of these observations are contested, but there is no doubt large language models are becoming more sophisticated.

So AI companies are still at work on bigger and more expensive models, and tech companies such as Microsoft and Apple are betting on returns from their existing investments in generative AI. According to one recent estimate, generative AI will need to produce US$600 billion in annual revenue to justify current investments, a figure likely to grow to US$1 trillion in the coming years.

For the moment, the biggest winner from the generative AI boom is Nvidia, the largest producer of the chips powering the generative AI arms race. As the proverbial shovel-maker in a gold rush, Nvidia recently became the most valuable public company in history, tripling its share price in a single year to reach a valuation of US$3 trillion in June.

What comes next?

As the AI hype begins to deflate and we move through the period of disillusionment, we are also seeing more realistic AI adoption strategies.

First, AI is being used to support humans rather than replace them. A recent survey of American companies found they are mainly using AI to improve efficiency (49%), reduce labour costs (47%) and enhance the quality of products (58%).

Second, we are seeing a rise in smaller (and cheaper) generative AI models, trained on specific data and deployed locally to reduce costs and optimise efficiency (see the sketch at the end of this article). Even OpenAI, which has led the race for ever-larger models, has released the GPT-4o mini model to reduce costs and improve performance.

Third, we see a strong focus on AI literacy training: educating the workforce on how AI works, its potential and limitations, and best practices for ethical AI use. We are likely to have to learn (and re-learn) how to use different AI technologies for years to come.

In the end, the AI revolution will look more like an evolution. Its use will grow gradually over time and, little by little, alter and transform human activities, which is much better than replacing them.
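As a concrete illustration of the "smaller model, deployed locally" pattern described above, here is a minimal sketch using the Hugging Face transformers library. The distilgpt2 checkpoint is an illustrative choice of small model, not one named in the article, and the first run downloads its weights.

```python
# Minimal sketch: running a small text-generation model locally with
# Hugging Face transformers (assumes `pip install transformers torch`).
# distilgpt2 is an illustrative small checkpoint, not one from the article.
from transformers import pipeline

# Loads the model onto the local machine; no remote API calls after download.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Generative AI adoption in business is"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

The trade-off this pattern makes is deliberate: a small local model is far cheaper to run and keeps data on-premises, at the cost of the breadth and fluency of frontier-scale models.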
[5]
Majority of social media users unable to identify AI - report (RT World News)
Growing use of artificial intelligence is outpacing media literacy, a report in Australia has found.

Media literacy among adults is not keeping pace with the rapid advancement of generative artificial intelligence (AI), according to research published in Australia on Monday. The trend is leaving internet users increasingly vulnerable to misinformation, the authors of the research said.

The AI industry exploded in 2022 after the launch of the chatbot and virtual assistant ChatGPT by US AI research organization OpenAI. The sector has since attracted billions of dollars in investment, with tech giants such as Google and Microsoft offering tools such as image and text generators.

However, users' confidence in their own digital media abilities remains low, according to the 'Adult Media Literacy in 2024' paper by Western Sydney University. In a sample of 4,442 adult Australians, respondents were asked how confident they were about performing a series of 11 media-related tasks requiring critical and technical abilities and/or knowledge. On average, respondents said they could complete just four of the 11 tasks with confidence.

The results are "largely unchanged" since 2021, when the previous research was conducted, the paper noted. The ability to identify misinformation online has not changed at all: in both 2021 and 2024, only 39% of respondents said they were confident they could check whether information they found online is true. The recent integration of generative AI into online environments makes it "even more difficult for citizens to know who or what to trust online," the report stated.

The slow growth in media literacy is particularly concerning given the ability of generative AI tools to produce high-quality deepfakes and misinformation, according to associate professor and research author Tanya Notley, as cited by the media outlet Decrypt. "It's getting harder and harder to identify where AI has been used. It's going to be used in more sophisticated ways to manipulate people with disinformation, and we can already see that happening," she warned. Combating this requires regulation, although that is happening slowly, Notley said.

Last week, the US Senate passed a bill designed to protect individuals from the non-consensual use of their likeness in AI-generated pornographic content. The bill was adopted following a scandal involving deepfake pornographic images of US pop singer Taylor Swift that spread through social media earlier this year.

Australians now favor online content over television and print newspapers as their source for news and information, the report noted, adding that this represents a "milestone in the way in which Australians are consuming media."
A recent survey reveals widespread apprehension among Australians regarding artificial intelligence. The study emphasizes the crucial role of media literacy in addressing these concerns and navigating the evolving AI landscape.
A recent survey has revealed significant apprehension among Australians regarding the rapid advancement and potential impacts of artificial intelligence (AI). The study, published as the 'Adult Media Literacy in 2024' report by researchers at Western Sydney University and partner Australian universities, found that many Australians are worried about the technology while few feel confident using it [1].
The survey, which sampled 4,442 adult Australians between January and April, found that just under four in ten (39%) have used text-based generative AI services such as ChatGPT, with 13% using them regularly. Image-focused tools such as Midjourney or DALL-E are far less familiar: only 3% use them regularly, and half of adults do not know of them at all [2].
The rise of generative AI, exemplified by tools like ChatGPT, has fuelled these concerns. More respondents agree generative AI will harm Australian society (40%) than disagree (16%), and almost three quarters (74%) believe laws and regulations are needed to manage its risks. Just 22% feel confident using generative AI tools, although 46% want to learn more [3].
Researchers emphasize that improved media literacy is crucial in addressing these concerns. Understanding how AI works, its capabilities, and limitations can help individuals navigate the evolving technological landscape more effectively. The study suggests that enhanced media literacy could empower people to critically evaluate AI-generated content and make informed decisions [1].
Despite the current concerns, some experts argue that the initial hype surrounding generative AI is beginning to subside. This transition may lead to more practical and beneficial applications of the technology. As the focus shifts from sensationalism to utility, AI could potentially address real-world problems and enhance various sectors [4].
The concerns expressed by Australians reflect a global trend of apprehension towards AI. Regulators are beginning to respond, although researchers such as Tanya Notley note this is happening slowly. Last week, for example, the US Senate passed a bill designed to protect individuals from the non-consensual use of their likeness in AI-generated pornographic content [5].
As AI continues to evolve, addressing public concerns through education, transparent communication, and responsible development will be crucial. Enhancing media literacy, fostering public dialogue, and implementing appropriate regulations can help strike a balance between harnessing the benefits of AI and mitigating its potential risks. The survey's findings serve as a call to action for policymakers, educators, and technology developers to work collaboratively in shaping a future where AI serves society's best interests.
Reference
[1] Most Australians are worried about artificial intelligence, new survey shows. Improved media literacy is vital (The Conversation)
[2] Most Australians worried about AI, new survey shows, media literacy vital
[3] Most Australians are worried about artificial intelligence, new survey shows. Improved media literacy is vital
[4] Generative AI hype is ending - and now the technology might actually become useful (The Conversation)
[5] Majority of social media users unable to identify AI - report (RT World News)