2 Sources
[1]
Reading signs: New method improves AI translation of sign language
Sign languages have been developed by nations around the world to fit the local communication style, and each consists of thousands of signs. This has made sign languages difficult to learn and understand. Using artificial intelligence to automatically translate signs into words, known as word-level sign language recognition, has now gained a boost in accuracy through the work of an Osaka Metropolitan University-led research group. Previous research methods focused on capturing information about the signer's general movements; accuracy problems stemmed from the different meanings that can arise from subtle differences in hand shape and in the position of the hands relative to the body. Graduate School of Informatics Associate Professor Katsufumi Inoue and Associate Professor Masakazu Iwamura worked with colleagues, including researchers at the Indian Institute of Technology Roorkee, to improve AI recognition accuracy. They added data such as hand and facial expressions, as well as skeletal information on the position of the hands relative to the body, to the information on the general movements of the signer's upper body. "We were able to improve the accuracy of word-level sign language recognition by 10-15% compared to conventional methods," Professor Inoue said. "In addition, we expect that the method we have proposed can be applied to any sign language, hopefully leading to improved communication with speaking- and hearing-impaired people in various countries."
[2]
Reading signs: New method improves AI translation of sign language
Sign languages have been developed by nations around the world to fit the local communication style, and each consists of thousands of signs. This has made sign languages difficult to learn and understand. Using artificial intelligence to automatically translate signs into words, known as word-level sign language recognition, has now gained a boost in accuracy through the work of an Osaka Metropolitan University-led research group. The findings were published in IEEE Access. Previous research methods focused on capturing information about the signer's general movements; accuracy problems stemmed from the different meanings that can arise from subtle differences in hand shape and in the position of the hands relative to the body. Graduate School of Informatics Associate Professor Katsufumi Inoue and Associate Professor Masakazu Iwamura worked with colleagues at the Indian Institute of Technology Roorkee to improve AI recognition accuracy. They added data such as hand and facial expressions, as well as skeletal information on the position of the hands relative to the body, to the information on the general movements of the signer's upper body. "We were able to improve the accuracy of word-level sign language recognition by 10-15% compared to conventional methods," Professor Inoue said. "In addition, we expect that the method we have proposed can be applied to any sign language, hopefully leading to improved communication with speaking- and hearing-impaired people in various countries."
Researchers from Osaka Metropolitan University and the Indian Institute of Technology Roorkee have developed a new AI method that improves the accuracy of sign language translation by 10-15%, potentially revolutionizing communication for the deaf and hard-of-hearing community worldwide.
Researchers from Osaka Metropolitan University and the Indian Institute of Technology Roorkee have made a significant advance in artificial intelligence (AI) technology for sign language translation. This breakthrough promises to enhance communication for deaf and hard-of-hearing individuals across the globe [1][2].
Sign languages, developed by various nations to suit local communication styles, comprise thousands of unique signs. This complexity has historically made sign languages challenging to learn and understand, especially for those outside the deaf community. Previous attempts at using AI for word-level sign language recognition have faced accuracy issues due to the nuanced nature of sign language, where subtle differences in hand shapes and positions can significantly alter meanings [1].
The research team, led by Associate Professors Katsufumi Inoue and Masakazu Iwamura of Osaka Metropolitan University's Graduate School of Informatics, has developed a method that addresses these challenges. Rather than capturing only the general movements of the signer's upper body, as conventional methods do, their approach also incorporates hand and facial expression data, along with skeletal information on the position of the hands relative to the body [1][2].
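To make the fused-feature idea concrete, here is a minimal sketch, not the authors' published architecture, of how multi-stream fusion for word-level recognition could look in PyTorch. The keypoint layouts, feature dimensions, GRU encoders, and the hands_relative_to_body helper are all illustrative assumptions rather than details from the paper:

```python
import torch
import torch.nn as nn

def hands_relative_to_body(hands, body):
    # Express each hand keypoint relative to a body reference joint
    # (joint 0, e.g. the neck), mirroring the idea of encoding where the
    # hands sit relative to the body. The keypoint layout is an assumption.
    b, t, _ = hands.shape
    ref = body[:, :, :2].unsqueeze(2)        # (b, t, 1, 2): reference joint
    rel = hands.view(b, t, -1, 2) - ref      # subtract from every keypoint
    return rel.view(b, t, -1)

class MultiStreamSignClassifier(nn.Module):
    """Toy multi-stream model: one GRU encoder per feature stream
    (upper body, hands, face); the final hidden states are concatenated
    and classified into a word label. All sizes are illustrative."""
    def __init__(self, body_dim=34, hand_dim=84, face_dim=140,
                 hidden=128, num_words=1000):
        super().__init__()
        self.body_enc = nn.GRU(body_dim, hidden, batch_first=True)
        self.hand_enc = nn.GRU(hand_dim, hidden, batch_first=True)
        self.face_enc = nn.GRU(face_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(3 * hidden, num_words)

    def forward(self, body, hands, face):
        # Each stream: (batch, frames, features). Keep final hidden states.
        _, hb = self.body_enc(body)
        _, hh = self.hand_enc(hands)
        _, hf = self.face_enc(face)
        fused = torch.cat([hb[-1], hh[-1], hf[-1]], dim=-1)
        return self.classifier(fused)

# Example: 2 clips of 60 frames each, with 2D keypoints per frame.
body = torch.randn(2, 60, 34)    # 17 upper-body joints x (x, y)
hands = torch.randn(2, 60, 84)   # 2 hands x 21 keypoints x (x, y)
face = torch.randn(2, 60, 140)   # 70 facial landmarks x (x, y)

model = MultiStreamSignClassifier()
logits = model(body, hands_relative_to_body(hands, body), face)
print(logits.shape)              # torch.Size([2, 1000]) word scores
```

In practice the keypoints would come from a pose and hand/face landmark estimator, and the streams could be fused with something richer than concatenation; the sketch only illustrates the core idea of feeding body, hand, and face information to the recognizer together.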
This innovative method has yielded remarkable results, improving the accuracy of word-level sign language recognition by 10-15% compared to traditional approaches. Professor Inoue expressed optimism about the potential applications of this technology, stating, "We expect that the method we have proposed can be applied to any sign language, hopefully leading to improved communication with speaking- and hearing-impaired people in various countries" [1].
The universality of this method is particularly noteworthy. Its potential applicability to various sign languages worldwide could significantly enhance accessibility and communication for deaf and hard-of-hearing communities globally. This breakthrough could pave the way for more inclusive technologies and bridge communication gaps in diverse settings, from educational institutions to public services [2].
The team's findings have been published in IEEE Access, a peer-reviewed scientific journal, underscoring the significance of this research in the field of AI and accessibility technology [2].
Summarized by Navi