2 Sources
[1]
AI is now used for audio description. But it should be accurate and actually useful for people with low vision
https://theconversation.com/ai-is-now-used-for-audio-description-but-it-should-be-accurate-and-actually-useful-for-people-with-low-vision-256808

Since the recent explosion of widely available generative artificial intelligence (AI), it now seems that a new AI tool emerges every week. With varying success, AI offers solutions for productivity, creativity, research, and also accessibility: making products, services and other content more usable for people with disability.

The award-winning 2024 Super Bowl ad for Google Pixel 8 is a poignant example of how the latest AI tech can intersect with disability. Directed by blind director Adam Morse, it showcases an AI-powered feature that uses audio cues, haptic feedback (where vibrating sensations communicate information to the user) and animations to assist blind and low-vision users in capturing photos and videos. The ad was applauded for being disability inclusive and representative. It also demonstrated a growing capacity for - and interest in - AI to generate more accessible technology.

AI is also poised to challenge how audio description is created and what it may sound like. This is the focus of our research team. Audio description is a track of narration that describes important visual elements of visual media, including television shows, movies and live performances. Synthetic voices and quick, automated visual descriptions might result in more audio description on our screens. But will users lose out in other ways?

AI as people's eyes

AI-powered accessibility tools are proliferating. Among them is Microsoft's Seeing AI, an app that turns your smartphone into a talking camera by reading text and identifying objects. The app Be My AI uses virtual assistants to describe photos taken by blind users; it's an AI version of the original app Be My Eyes, where the same task was done by human volunteers.
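One practical constraint any audio description pipeline, human or automated, has to respect is timing: narration is conventionally fitted into the pauses between dialogue. A minimal sketch of that scheduling step, with all timings and the minimum-gap threshold invented for illustration:

```python
# Toy sketch: find pauses between dialogue segments where audio
# description (AD) narration could be inserted. Timings are in
# seconds and entirely invented; real tools derive them from
# subtitle files or audio analysis.

def find_description_gaps(dialogue, min_gap=2.0):
    """Return (start, end) pauses between dialogue long enough for AD."""
    gaps = []
    prev_end = 0.0
    for start, end in sorted(dialogue):
        if start - prev_end >= min_gap:
            gaps.append((prev_end, start))
        prev_end = max(prev_end, end)
    return gaps

# Dialogue segments (start, end) for a hypothetical scene:
dialogue = [(3.0, 6.5), (7.0, 12.0), (18.0, 22.0)]
print(find_description_gaps(dialogue))  # [(0.0, 3.0), (12.0, 18.0)]
```

The hard part, of course, is not finding the gaps but deciding what is important enough to describe in them, which is where human judgment currently dominates.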
A growing number of AI software options are available for text-to-speech and document reading, as well as for producing audio description. Audio description is an essential feature to make visual media accessible to blind or vision impaired audiences. But its benefits go beyond that. Increasingly, research shows audio description benefits other disability groups and mainstream audiences without disability. Audio description can also be a creative way to further develop or enhance a visual text.

Traditionally, audio description has been created using human voices, script writers and production teams. However, in the last year several international streaming services, including Netflix and Amazon Prime, have begun offering audio description that is at least partially generated with AI. Yet there are a number of issues with the current AI technologies, including their ability to generate false information. These tools need to be critically appraised and improved.

Is AI coming for audio description jobs?

There are multiple ways in which AI might impact the creation - and end result - of audio description. With AI tools, streaming services can get synthetic voices to "read" an audio description script. There's potential for various levels of automation, while giving users the chance to customise audio description to suit their specific needs and preferences. Want your cooking show to be narrated in a British accent? With AI, you could change that with the press of a button.

However, in the audio description industry many are worried AI could undermine the quality, creativity and professionalism humans bring to the equation. The language-learning app Duolingo, for example, recently announced it was moving forward with "AI first" development. As a result, many contractors lost jobs that can now purportedly be done by algorithms. On the one hand, AI could help broaden the range of audio descriptions available for a range of media and live experiences.
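The "press of a button" customisation described above amounts to matching a user's stated preferences against the voices a synthetic speech engine offers. A toy sketch of that selection step; the voice catalogue, field names and values are all invented, though real TTS engines expose comparable metadata:

```python
# Toy sketch: choose a synthetic voice for audio description from
# user preferences. The catalogue below is invented for illustration;
# real text-to-speech engines expose similar per-voice metadata.

VOICES = [
    {"name": "ava",  "accent": "en-US", "style": "neutral"},
    {"name": "beth", "accent": "en-GB", "style": "warm"},
    {"name": "carl", "accent": "en-AU", "style": "neutral"},
]

def pick_voice(voices, accent=None, style=None):
    """Return the first voice matching every requested preference."""
    for voice in voices:
        if accent and voice["accent"] != accent:
            continue
        if style and voice["style"] != style:
            continue
        return voice["name"]
    return voices[0]["name"]  # no match: fall back to a default voice

print(pick_voice(VOICES, accent="en-GB"))  # beth
```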
But AI audio description may also cost jobs rather than create them. The worst outcome would be a huge amount of lower-quality audio description, which would undermine the value of creating it at all.

Can we trust AI to describe things well?

Industry impact and the technical details of how AI can be used in audio description are one thing. What's currently lacking is research that centres the perspectives of users and takes into consideration their experiences and needs for future audio description. Accuracy - and trust in this accuracy - is vitally important for blind and low-vision audiences.

Cheap and often free, AI tools are now widely used to summarise, transcribe and translate. But it's a well-known problem that generative AI struggles to stay factual. Known as "hallucinations", these plausible fabrications proliferate even when the AI tools are not asked to create anything new - like doing a simple audio transcription. If AI tools simply fabricate content rather than make existing material accessible, they will further distance and disadvantage blind and low-vision consumers.

We can use AI for accessibility - with care

AI is a relatively new technology, and for it to be a true benefit in terms of accessibility, its accuracy and reliability need to be absolute. Blind and low-vision users need to be able to turn on AI tools with confidence. In the current "AI rush" to make audio description cheaper, quicker and more available, it's vital that the people who need it the most are closely involved in how the tech is deployed.
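The hallucination worry raised above suggests one obvious (if partial) mitigation: cross-checking generated descriptions against what is verifiably on screen before they reach users. A deliberately naive sketch of such a grounding check; the detected-object list, example description and checkable-word set are all invented, and a production system would rely on vision models plus, crucially, human review:

```python
# Naive sketch of a grounding check: flag words in a generated
# description that match no object detected in the frame. All data
# here is invented for illustration; this is a toy stand-in for the
# human and automated review real AD pipelines would need.

import re

def ungrounded_terms(description, detected_objects):
    """Return checkable nouns in the description never detected on screen."""
    detected = {obj.lower() for obj in detected_objects}
    words = set(re.findall(r"[a-z]+", description.lower()))
    # A tiny fixed vocabulary standing in for real noun extraction:
    checkable = {"dog", "cat", "car", "table", "knife"}
    return sorted((words & checkable) - detected)

detected = ["table", "knife", "chef"]
text = "A chef chops vegetables at a table while a dog watches."
print(ungrounded_terms(text, detected))  # ['dog'] -- flag for review
```

A check like this can only flag candidate fabrications; it says nothing about descriptions that are accurate but unhelpful, which is exactly why the article argues for centring users in evaluation.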
[2]
AI is now used for audio description. But it should be accurate and actually useful for people with low vision
AI is revolutionizing audio description for visual media, but concerns about accuracy and job displacement are emerging. The technology offers potential benefits but requires careful implementation to truly serve visually impaired users.
Artificial Intelligence (AI) is making significant strides in the field of accessibility, particularly in audio description for visual media. This technology is poised to transform how audio descriptions are created and delivered, potentially making visual content more accessible to blind and low-vision audiences [1].
A notable example of AI's intersection with disability accessibility was showcased in the 2024 Super Bowl ad for Google Pixel 8. Directed by blind director Adam Morse, the ad highlighted an AI-powered feature that uses audio cues, haptic feedback, and animations to assist visually impaired users in capturing photos and videos [1][2].
Source: Phys.org
The proliferation of AI-powered accessibility tools is evident in various applications:
- Microsoft's Seeing AI turns a smartphone into a talking camera that reads text and identifies objects
- Be My AI uses AI to describe photos taken by blind users, automating a task originally performed by human volunteers through Be My Eyes
- Streaming services including Netflix and Amazon Prime have begun offering audio description that is at least partially generated with AI
While AI offers promising solutions for accessibility, it also raises concerns within the industry:
- AI could undermine the quality, creativity and professionalism that human describers bring to the work
- Jobs may be lost rather than created, as when Duolingo's "AI first" shift displaced many contractors
- A flood of lower-quality audio description could undermine the value of creating it at all
A critical concern in the implementation of AI in audio description is the accuracy of the generated content:
- Generative AI is prone to "hallucinations", producing plausible fabrications even in simple tasks such as audio transcription
- Fabricated descriptions would further distance and disadvantage blind and low-vision consumers rather than serve them
As AI continues to develop in this field, experts emphasize the importance of user-centered research:
- Research that centers the perspectives, experiences and needs of blind and low-vision users is currently lacking
- Accuracy, and users' trust in that accuracy, is vitally important to these audiences
While AI presents exciting possibilities for improving accessibility, its implementation requires careful consideration:
- For AI to be a true accessibility benefit, its accuracy and reliability need to be absolute
- Blind and low-vision users must be able to turn on AI tools with confidence
As the "AI rush" continues to push for cheaper, quicker, and more widely available audio descriptions, it is essential that the technology is developed and deployed with the needs and experiences of visually impaired users at the forefront.