We’ve all seen TV shows and movies make use of subtitles for the hearing impaired. However, for many deaf people, it takes more effort to decode the English words used in the subtitles than it would to understand the material if it were presented in their native tongue: sign language.
In an attempt to address this issue, the NHK Science & Technology Research Laboratories just released some interesting new technology: a system that automatically translates spoken language into sign language using an animated virtual avatar.
As the researchers who developed the system explained to Akihabara News, “Subtitles are fine for people who understand Japanese and who lost their hearing at some point. Meanwhile, people who are deaf from birth learn sign language first and naturally study Japanese after that, but they find that sign language is easier to understand than subtitles, so we are conducting research in sign language.”
The end goal is a system that translates words into signs accurately as they are spoken. If you think that sounds difficult, you’re right. The NHK team has made a good start, but they aren’t quite there yet. According to Akihabara News, the researchers said, “We asked a number of deaf people to watch the animations, and while they certainly could understand it word by word, they pointed out it still lacks fluency as sign language. In the future we also want to improve that level of fluency.”
When it comes to sign language (or any other language, for that matter), a human interpreter is always going to be your best bet. However, this technology certainly could help the hearing impaired understand TV broadcasts when an interpreter isn’t available.
To help improve the technology, the research team included a feature that allows viewers to comment on and correct the translations the system produces.
Check it out in this demonstration video from the Technology Open House 2011: