Curated by THEOUTPOST
On Thu, 16 Jan, 8:02 AM UTC
2 Sources
[1]
Reading signs: New method improves AI translation of sign language
Sign languages have developed in nations around the world to fit local communication styles, and each language consists of thousands of signs. This has made sign languages difficult to learn and understand. Using artificial intelligence to automatically translate signs into words, known as word-level sign language recognition, has now gained a boost in accuracy through the work of an Osaka Metropolitan University-led research group.

Previous research methods focused on capturing information about the signer's general movements. Accuracy problems have stemmed from the different meanings that can arise from subtle differences in hand shape and in the position of the hands relative to the body. Graduate School of Informatics Associate Professor Katsufumi Inoue and Associate Professor Masakazu Iwamura worked with colleagues, including researchers at the Indian Institute of Technology Roorkee, to improve AI recognition accuracy. They added data on hand and facial expressions, as well as skeletal information on the position of the hands relative to the body, to the information on the general movements of the signer's upper body.

"We were able to improve the accuracy of word-level sign language recognition by 10-15% compared to conventional methods," Professor Inoue said. "In addition, we expect that the method we have proposed can be applied to any sign language, hopefully leading to improved communication with speaking- and hearing-impaired people in various countries."
[2]
Reading signs: New method improves AI translation of sign language
Sign languages have developed in nations around the world to fit local communication styles, and each language consists of thousands of signs. This has made sign languages difficult to learn and understand. Using artificial intelligence to automatically translate signs into words, known as word-level sign language recognition, has now gained a boost in accuracy through the work of an Osaka Metropolitan University-led research group. The findings were published in IEEE Access.

Previous research methods focused on capturing information about the signer's general movements. Accuracy problems have stemmed from the different meanings that can arise from subtle differences in hand shape and in the position of the hands relative to the body. Graduate School of Informatics Associate Professor Katsufumi Inoue and Associate Professor Masakazu Iwamura worked with colleagues at the Indian Institute of Technology Roorkee to improve AI recognition accuracy. They added data on hand and facial expressions, as well as skeletal information on the position of the hands relative to the body, to the information on the general movements of the signer's upper body.

"We were able to improve the accuracy of word-level sign language recognition by 10-15% compared to conventional methods," Professor Inoue said. "In addition, we expect that the method we have proposed can be applied to any sign language, hopefully leading to improved communication with speaking- and hearing-impaired people in various countries."
Researchers from Osaka Metropolitan University and the Indian Institute of Technology Roorkee have developed a new AI method that improves the accuracy of sign language translation by 10-15%, potentially transforming communication for the deaf and hard-of-hearing community worldwide.
Researchers from Osaka Metropolitan University and the Indian Institute of Technology Roorkee have made a significant advance in artificial intelligence (AI) technology for sign language translation. The work promises to enhance communication for deaf and hard-of-hearing individuals across the globe [1][2].
Sign languages, developed by various nations to suit local communication styles, comprise thousands of unique signs. This complexity has historically made sign languages challenging to learn and understand, especially for those outside the deaf community. Previous attempts at using AI for word-level sign language recognition have faced accuracy issues due to the nuanced nature of sign language, where subtle differences in hand shapes and positions can significantly alter meanings [1].
The research team, led by Associate Professors Katsufumi Inoue and Masakazu Iwamura of Osaka Metropolitan University's Graduate School of Informatics, has developed a novel method to address these challenges. Where conventional methods captured only the general movements of the signer's upper body, their approach also incorporates hand and facial expression data, along with skeletal information on the position of the hands relative to the body [2].
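To make the idea concrete, the following is a minimal sketch of this kind of multi-stream feature fusion, assuming MediaPipe Holistic-style 2D keypoints as input. The keypoint source, the shoulder-based normalization, and the downstream sequence classifier are all illustrative assumptions, not the authors' published pipeline.

```python
import numpy as np

def body_relative(points_xy: np.ndarray, shoulder_center: np.ndarray,
                  shoulder_width: float) -> np.ndarray:
    """Express keypoints relative to the signer's body, so the same sign
    performed by different signers or at different camera distances
    yields similar features."""
    return (points_xy - shoulder_center) / shoulder_width

def frame_features(pose_xy: np.ndarray, left_hand_xy: np.ndarray,
                   right_hand_xy: np.ndarray, face_xy: np.ndarray) -> np.ndarray:
    """Fuse one frame's streams (upper-body pose, both hands, face)
    into a single feature vector."""
    # MediaPipe pose landmarks 11 and 12 are the left and right shoulders.
    shoulder_center = (pose_xy[11] + pose_xy[12]) / 2.0
    shoulder_width = float(np.linalg.norm(pose_xy[11] - pose_xy[12])) + 1e-6
    streams = [
        body_relative(pose_xy, shoulder_center, shoulder_width),       # general upper-body movement
        body_relative(left_hand_xy, shoulder_center, shoulder_width),  # hand shape + body-relative position
        body_relative(right_hand_xy, shoulder_center, shoulder_width),
        body_relative(face_xy, shoulder_center, shoulder_width),       # facial expression
    ]
    return np.concatenate([s.ravel() for s in streams])

# A whole sign clip becomes a (num_frames, feature_dim) array that a
# sequence classifier (e.g., a GRU or transformer) could map to a word:
#   features = np.stack([frame_features(*f) for f in frames])
```

Normalizing every stream by shoulder position and width is one common way to make hand-position features body-relative, which is the kind of skeletal information the researchers describe adding to the general movement data.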
The new method improves the accuracy of word-level sign language recognition by 10-15% compared to conventional approaches. Professor Inoue expressed optimism about the potential applications of this technology, stating, "We expect that the method we have proposed can be applied to any sign language, hopefully leading to improved communication with speaking- and hearing-impaired people in various countries" [1].
The universality of this method is particularly noteworthy. Its potential applicability to various sign languages worldwide could significantly enhance accessibility and communication for deaf and hard-of-hearing communities globally. This could pave the way for more inclusive technologies and bridge communication gaps in diverse settings, from educational institutions to public services [2].
The team's findings have been published in IEEE Access, a peer-reviewed scientific journal, underscoring the significance of this research in the field of AI and accessibility technology [2].