Curated by THEOUTPOST
On Thu, 17 Oct, 1:09 PM UTC
2 Sources
[1]
For Deaf people, train travel can be a gamble. But an AI-powered Auslan avatar can help
Queensland University of Technology provides funding as a member of The Conversation AU.

For Deaf people, train travel can be a gamble. On an average day, nothing goes wrong: they catch their train to their destination and carry on with their business. But when something out of the ordinary happens, the situation can quickly become frightening, because most updates are delivered only by audio announcements.

A Deaf traveller may miss their train because it was moved to a different platform, or watch their station whizz by because the train isn't stopping there today. They may even remain on a train carriage in an emergency after everyone else has evacuated, and have to be rescued by station staff. Every one of these examples is drawn from the real-life experiences of Deaf people in Sydney.

My colleagues and I are working with Sydney Trains and members of the Australian Deaf community to develop an advanced, artificial intelligence (AI)-powered signing avatar that can automatically translate audio announcements into Auslan. Our work on the avatar also builds towards the next step: developing AI systems that can "understand" Auslan.

Journeys don't always go to plan

Earlier this year, my colleagues and I ran a pilot study with three Deaf train travellers in Sydney. As well as hearing their stories about what can go wrong during train travel, we learned they use tried-and-tested strategies to make their journeys go smoothly.

These strategies might be familiar to regular commuters. For example, they would plan their journeys with an app, arrive early, and look for signage telling them if anything had changed. But they also said they felt they needed to stand near information screens to watch for updates, and to ask station staff or other passengers for information when the situation changed. They also reported being hypervigilant while on the train, watching to make sure they didn't miss their stop.

Even so, these strategies didn't always ensure Deaf travellers received important information, including about emergencies. While usually helpful, station staff were sometimes too busy to assist. The greatest frustration came when other passengers weren't willing or able to provide information, leaving our Deaf travellers to simply "follow the crowd". This often meant ending up in the wrong place.

Developing a signing avatar

Speech-to-text software might seem like an easy solution to some of these problems. But for many Deaf people, English is not their native language, and Auslan can be processed far more easily and quickly.

Our Deaf travellers told us that, in a perfect world, they would want live interpreters. Even so, they found automatic, AI-powered translation appealing for a number of reasons. In our design, a signing avatar displayed on a platform or train screen identifies key words in an audio announcement, generates a sentence with correct Auslan grammar, and stitches together the corresponding signs from our vocabulary library.

First, this allows real-time translation of announcements that use known vocabulary, which suits the trains-and-stations context, where many announcements cover similar topics.

Second, an avatar and its signing can be customised to the needs of a given situation, such as using information about a screen's location to ensure the avatar signs in the right direction while pointing out exits or other platforms.

Third, multiple signers can contribute signs to the avatar's vocabulary, which can then be smoothly stitched together into sentences.

And importantly, an avatar means no real person has to be the "face" of an organisation's automatically generated announcements. This matters because the Australian Deaf community is small and close-knit: if something goes wrong with the translation, nobody suffers reputational damage.

From a technical point of view, an avatar also lets us ensure a minimum quality threshold for signing. We're using motion capture to make sure each sign in our vocabulary library is accurate and its movements are clear. This also helps us avoid the "uncanny valley", the unsettling effect of something human-like but subtly wrong. We don't want any of the many-fingered monstrosities you may have seen recently generated by AI.
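As a rough illustration of the three steps described above, here is a minimal Python sketch of how keyword spotting, Auslan gloss ordering and clip stitching might fit together. Everything in it (the vocabulary entries, the motion-capture file names, the placeholder grammar rule) is a hypothetical assumption for illustration, not the project's actual implementation.

```python
# Hypothetical sketch only: the vocabulary, file names and grammar rule are
# illustrative assumptions, not the actual Sydney Trains project code.

from dataclasses import dataclass

@dataclass
class SignClip:
    gloss: str        # Auslan gloss label, e.g. "TRAIN"
    motion_file: str  # motion-capture recording the avatar plays back

# Toy stand-in for the motion-captured vocabulary library.
VOCABULARY = {
    "train": SignClip("TRAIN", "mocap/train.bvh"),
    "platform": SignClip("PLATFORM", "mocap/platform.bvh"),
    "change": SignClip("CHANGE", "mocap/change.bvh"),
    "four": SignClip("FOUR", "mocap/four.bvh"),
}

def extract_keywords(announcement: str) -> list[str]:
    """Step 1: identify known key words in the announcement transcript."""
    words = announcement.lower().replace(",", " ").replace(".", " ").split()
    return [w for w in words if w in VOCABULARY]

def order_as_auslan(keywords: list[str]) -> list[str]:
    """Step 2: arrange glosses into an Auslan-grammatical sentence.

    Auslan grammar differs from English word order; a real system would
    reorder constituents (for example, topic first). This placeholder
    simply keeps the keywords as found.
    """
    return keywords

def stitch_signs(glosses: list[str]) -> list[SignClip]:
    """Step 3: look up each gloss's motion-capture clip for playback."""
    return [VOCABULARY[g] for g in glosses]

announcement = "The train on platform four will now depart after a platform change."
clips = stitch_signs(order_as_auslan(extract_keywords(announcement)))
print([clip.gloss for clip in clips])
# ['TRAIN', 'PLATFORM', 'FOUR', 'PLATFORM', 'CHANGE']
```

In a real system, each vocabulary entry would be a motion-captured recording from a Deaf signer, and the grammar step would be a genuine Auslan generation model rather than a pass-through.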
AI for everyone

This work is one step towards our broader aim of creating an AI system that can understand Auslan. Such a system could help Deaf and hearing station staff converse, or power "chatbot booths" and app-based assistants that let Deaf people get information on demand in Auslan about their train journeys or other daily tasks.

Sign languages and Deaf cultures around the world have nuances and complexities that hearing researchers and AI developers may not be aware of. These nuances and complexities must be embedded in new technologies, and researchers and developers must take a language-first approach to AI data collection and design with, not just for, Deaf people. Only then will AI meet Deaf people's real needs: ensuring their safety and independence in every aspect of daily life.
[2]
For Deaf people, train travel can be a gamble -- AI-powered Auslan avatar can help
Researchers are developing an AI-powered Auslan avatar to translate audio announcements into sign language, aiming to improve train travel experiences for Deaf passengers in Sydney.
Train travel for Deaf passengers can be fraught with uncertainty and potential dangers. A recent pilot study conducted with three Deaf train travelers in Sydney revealed several critical issues [1]. Deaf passengers often miss important audio announcements, leading to situations where they might board the wrong train, miss their stop, or remain unaware of emergency evacuations.
Deaf travelers have developed various strategies to navigate train travel, including:

- Planning journeys in advance with an app
- Arriving early and checking signage for changes
- Standing near information screens to watch for updates
- Asking station staff or other passengers when plans change
- Staying hypervigilant on the train so they don't miss their stop
However, these strategies are not foolproof. Station staff may be too busy to assist, and other passengers might be unwilling or unable to provide accurate information [2].
Researchers at Queensland University of Technology, in collaboration with Sydney Trains and the Australian Deaf community, are developing an advanced AI-powered signing avatar to address these challenges [1]. This avatar aims to automatically translate audio announcements into Auslan (Australian Sign Language).
Key features of the AI-powered Auslan avatar include:

- Real-time translation of announcements that use known vocabulary
- Signing customized to the situation, such as pointing toward the correct exit or platform based on the screen's location
- A vocabulary library built from signs contributed by multiple signers, stitched together smoothly into sentences
- No real person serving as the "face" of automatically generated announcements
The development team is using motion capture technology to ensure accurate and clear sign representations in the avatar's vocabulary library. This approach helps maintain a minimum quality threshold for signing and avoids the "uncanny valley" effect often associated with AI-generated human-like representations [2].
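To give a flavour of what stitching signs together "smoothly" might mean in practice, here is a toy Python sketch of crossfading between two motion clips. The clip data and single-joint simplification are hypothetical; real pipelines blend whole skeleton poses, often interpolating joint rotations as quaternions.

```python
# Toy sketch: crossfade two motion clips so the avatar's transition between
# consecutive signs looks smooth. This simplified version blends a single
# joint angle linearly over a short overlap window.

def crossfade(clip_a: list[float], clip_b: list[float], overlap: int) -> list[float]:
    """Blend the tail of clip_a into the head of clip_b over `overlap` frames."""
    blended = []
    for i in range(overlap):
        t = (i + 1) / (overlap + 1)  # blend weight ramps from 0 towards 1
        a = clip_a[len(clip_a) - overlap + i]
        b = clip_b[i]
        blended.append((1 - t) * a + t * b)
    return clip_a[:-overlap] + blended + clip_b[overlap:]

# Two made-up "elbow angle" tracks, one per sign.
sign_one = [0.0, 10.0, 20.0, 30.0]
sign_two = [60.0, 50.0, 40.0, 30.0]
print(crossfade(sign_one, sign_two, overlap=2))
# [0.0, 10.0, 33.33..., 43.33..., 40.0, 30.0]
```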
This project is part of a larger initiative to create AI systems capable of understanding Auslan. Future applications could include:

- Helping Deaf and hearing station staff converse with each other
- "Chatbot booths" or app-based assistants that provide on-demand information in Auslan about train journeys and other daily tasks
Researchers emphasize the need for a language-first approach in AI development for sign languages. This involves:

- Embedding the nuances and complexities of sign languages and Deaf cultures in new technologies
- Collecting data and designing systems with, not just for, Deaf people
By adhering to these principles, AI can better meet the real needs of Deaf people, ensuring their safety and independence in various aspects of daily life [1].
Researchers at Florida Atlantic University have developed an innovative AI system that translates American Sign Language (ASL) to text in real-time, achieving 98.7% accuracy and potentially transforming communication for the deaf and hard-of-hearing community.
2 Sources
Researchers from Osaka Metropolitan University and Indian Institute of Technology Roorkee have developed a new AI method that improves the accuracy of sign language translation by 10-15%, potentially revolutionizing communication for the deaf and hard of hearing community worldwide.
2 Sources
Nvidia, in collaboration with the American Society for Deaf Children and Hello Monday, has introduced 'Signs', an AI-driven platform designed to teach American Sign Language (ASL) and create a comprehensive ASL dataset for future AI applications.
7 Sources
Cornell University researchers have created SpellRing, an AI-powered ring that uses micro-sonar technology to translate American Sign Language fingerspelling into text, potentially revolutionizing communication for the deaf and hard-of-hearing community.
3 Sources
University of Michigan researchers have developed WorldScribe, an AI-powered software that provides real-time audio descriptions of surroundings for people who are blind or have low vision, potentially revolutionizing their daily experiences.
2 Sources