2 Sources
[1]
Reading signs: New method improves AI translation of sign language
Sign languages have been developed by nations around the world to fit the local communication style, and each language consists of thousands of signs. This has made sign languages difficult to learn and understand. Using artificial intelligence to automatically translate the signs into words, known as word-level sign language recognition, has now gained a boost in accuracy through the work of an Osaka Metropolitan University-led research group. The findings were published in IEEE Access. Previous research methods have focused on capturing information about the signer's general movements. The problems in accuracy have stemmed from the different meanings that could arise from subtle differences in hand shape and in the relative positions of the hands and body. Graduate School of Informatics Associate Professor Katsufumi Inoue and Associate Professor Masakazu Iwamura worked with colleagues, including researchers at the Indian Institute of Technology Roorkee, to improve AI recognition accuracy. They added data such as hand and facial expressions, as well as skeletal information on the position of the hands relative to the body, to the information on the general movements of the signer's upper body. "We were able to improve the accuracy of word-level sign language recognition by 10-15% compared to conventional methods," Professor Inoue said. "In addition, we expect that the method we have proposed can be applied to any sign language, hopefully leading to improved communication with speaking- and hearing-impaired people in various countries."
[2]
Reading signs: New method improves AI translation of sign language
Researchers from Osaka Metropolitan University and Indian Institute of Technology Roorkee have developed a new AI method that improves the accuracy of sign language translation by 10-15%, potentially revolutionizing communication for the deaf and hard of hearing community worldwide.
Researchers from Osaka Metropolitan University and the Indian Institute of Technology Roorkee have made a significant advancement in artificial intelligence (AI) technology for sign language translation. This breakthrough promises to enhance communication for deaf and hard of hearing individuals across the globe [1][2].
Sign languages, developed by various nations to suit local communication styles, comprise thousands of unique signs. This complexity has historically made sign languages challenging to learn and understand, especially for those outside the deaf community. Previous attempts at using AI for word-level sign language recognition have faced accuracy issues due to the nuanced nature of sign language, where subtle differences in hand shapes and positions can significantly alter meanings [1].
The research team, led by Associate Professors Katsufumi Inoue and Masakazu Iwamura from Osaka Metropolitan University's Graduate School of Informatics, has developed a novel method to address these challenges. Rather than capturing only the general movements of the signer's upper body, as conventional methods did, their approach also incorporates hand and facial expression data, along with skeletal information on the position of the hands relative to the body [2].
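Concretely, the article describes combining several per-frame feature streams: general upper-body motion, hand shape, the hands' positions relative to the body, and facial information. A minimal sketch of that kind of multi-stream feature fusion is below; the function name, array shapes, and choice of torso reference joint are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fuse_sign_features(body_kps, hand_kps, face_feats):
    """Fuse per-frame feature streams into one vector per frame.

    Illustrative only: the researchers combine upper-body movement
    with hand shape, facial information, and skeletal hand-to-body
    positions; the actual model architecture is not shown here.
    """
    # Hand positions relative to the body: express hand keypoints
    # in a body-centred frame by subtracting a torso reference joint.
    torso_ref = body_kps[:, :1, :]             # (T, 1, 2) reference joint
    rel_hands = hand_kps - torso_ref           # hands relative to body

    streams = [
        body_kps.reshape(len(body_kps), -1),      # general upper-body motion
        hand_kps.reshape(len(hand_kps), -1),      # hand shape detail
        rel_hands.reshape(len(rel_hands), -1),    # hand-to-body relation
        face_feats.reshape(len(face_feats), -1),  # facial expression
    ]
    return np.concatenate(streams, axis=1)     # (T, fused_dim)

# Toy sequence: 10 frames, 8 body joints, 21 hand points, 16 face features
T = 10
body = np.random.rand(T, 8, 2)
hands = np.random.rand(T, 21, 2)
face = np.random.rand(T, 16, 1)
fused = fuse_sign_features(body, hands, face)
print(fused.shape)  # → (10, 116)
```

In a real recognizer, the fused per-frame vectors would then feed a sequence classifier that maps a signing clip to a word; the sketch only shows how the separate streams combine into one feature matrix.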
This innovative method has yielded remarkable results, improving the accuracy of word-level sign language recognition by 10-15% compared to traditional approaches. Professor Inoue expressed optimism about the potential applications of this technology, stating, "We expect that the method we have proposed can be applied to any sign language, hopefully leading to improved communication with speaking- and hearing-impaired people in various countries" [1].
The universality of this method is particularly noteworthy. Its potential applicability to various sign languages worldwide could significantly enhance accessibility and communication for deaf and hard of hearing communities globally. This breakthrough could pave the way for more inclusive technologies and bridge communication gaps in diverse settings, from educational institutions to public services [2].
The team's findings have been published in IEEE Access, a peer-reviewed scientific journal, underscoring the significance of this research in the field of AI and accessibility technology [2].