A University at Buffalo-led research team modified noise-canceling headphones, enabling the common electronic device to “see” and translate American Sign Language (ASL) when paired with a smartphone.
Reported in the journal Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, the headphone-based system uses Doppler technology to sense tiny fluctuations, or echoes, in acoustic soundwaves created by the hands of someone signing.
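To give a sense of scale, the Doppler effect the system relies on can be sketched with the standard two-way reflection formula. The 20 kHz carrier and 0.5 m/s hand speed below are hypothetical illustrations; the article does not state SonicASL's actual operating parameters.

```python
# Back-of-the-envelope Doppler shift for an acoustic tone reflecting
# off a moving hand. Carrier frequency and hand speed are hypothetical;
# the article doesn't give SonicASL's actual numbers.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def reflected_doppler_shift(carrier_hz: float, hand_speed_mps: float) -> float:
    """Frequency shift of a tone bounced off a reflector moving toward
    the device, using the two-way approximation: delta_f = 2 * v * f / c."""
    return 2.0 * hand_speed_mps * carrier_hz / SPEED_OF_SOUND

print(round(reflected_doppler_shift(20_000, 0.5), 1))  # → 58.3 (Hz)
```

Even slow hand motion produces a shift of tens of hertz on a near-ultrasonic carrier, which is the kind of "tiny fluctuation" a microphone can pick up.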
Dubbed SonicASL, the system proved 93.8 percent effective in tests performed indoors and outdoors involving 42 words. Word examples include “love,” “space,” and “camera.” Under the same conditions involving 30 simple sentences, for example, “Nice to meet you,” SonicASL was 90.6 percent effective.
“SonicASL is an exciting proof-of-concept that could eventually help greatly improve communication between deaf and hearing populations,” says corresponding author Zhanpeng Jin, PhD, professor in the Department of Computer Science and Engineering at UB.
Before such technology is commercially available, much work must be done, he stressed. For instance, SonicASL’s vocabulary must be greatly expanded. Also, the system must be able to read facial expressions, a major component of ASL.
The study will be presented at the ACM Conference on Pervasive and Ubiquitous Computing (UbiComp), taking place Sept. 21–26.
For the deaf, communication barriers persist
Worldwide, according to the World Federation of the Deaf, there are approximately 72 million deaf people using more than 300 different sign languages.
Although the United Nations recognizes that sign languages are equal in importance to the spoken word, that view isn’t yet a reality in many nations. People who are deaf or hard of hearing still face multiple communication barriers.
Traditionally, communications between deaf American Sign Language (ASL) users and hearing people who don’t know the language take place either in the presence of an ASL interpreter or through a camera setup.
A frequent concern over the use of cameras, according to Jin, is whether those video recordings might be misused. And while the use of ASL interpreters is becoming more common, there’s no guarantee that one will be available when needed.
SonicASL aims to address these issues, especially in casual circumstances without prearranged planning and setup, Jin says.
Modify headphones with speaker, add app
Most noise-canceling headphones rely on an outward-facing microphone that picks up environmental noise. The headphones then produce an anti-sound, a soundwave with the same amplitude as the surrounding noise but with an inverted phase, to cancel the external noise.
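The cancellation principle described above can be sketched in a few lines. This is an idealized offline illustration, not how headphone hardware actually works (real devices do this in low-latency analog or DSP circuitry), and the 440 Hz tone is just a stand-in for environmental noise.

```python
import numpy as np

# Idealized sketch of destructive interference: an "anti-sound" with
# the same amplitude but inverted phase cancels the incoming noise.
sample_rate = 48_000                         # samples per second
t = np.arange(0, 0.01, 1 / sample_rate)      # 10 ms of audio

noise = 0.5 * np.sin(2 * np.pi * 440 * t)    # a 440 Hz "environmental" tone
anti_sound = -noise                          # same amplitude, inverted phase

residual = noise + anti_sound                # what would reach the ear
print(np.max(np.abs(residual)))              # → 0.0 (perfect cancellation)
```

In practice cancellation is imperfect because the anti-sound must be generated with near-zero delay, but the phase-inversion idea is exactly this.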
“We added a second speaker next to the outward-facing microphone. We wanted to see whether the modified headphones could sense moving objects, similar to radar,” says co-lead author Yincheng Jin, a PhD candidate in Jin’s lab.
The speaker and microphone do indeed capture hand movements. The information is relayed to the SonicASL cellphone app, which contains an algorithm the team created to identify the words and sentences. The app then translates the signs and speaks to the hearing person via the earphones.
“We tested SonicASL under different environments, including office, apartment, corridor and sidewalk locations,” says co-lead author Yang Gao, PhD, who completed the research in Jin’s lab before becoming a postdoctoral scholar at Northwestern University. “Although we observed a small decrease in accuracy as overall environmental noise increased, the overall accuracy remains quite good because the majority of environmental noises don’t overlap or interfere with the frequency range required by SonicASL.”
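Gao's point about non-overlapping frequencies can be illustrated with a band-pass filter: noise concentrated at other frequencies is simply removed. The 20 kHz probe tone and the low-frequency "environmental" tones below are hypothetical choices; the article does not state SonicASL's actual operating band.

```python
import numpy as np

# Hedged sketch: environmental noise at low frequencies barely affects
# a near-ultrasonic sensing tone once the signal is band-pass filtered.
# All frequencies here are illustrative assumptions.
fs = 48_000                                    # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)                  # 100 ms of audio

sensing = np.sin(2 * np.pi * 20_000 * t)       # near-ultrasonic probe tone
env_noise = sum(np.sin(2 * np.pi * f * t)      # speech/traffic-like tones,
                for f in (300, 1_200, 3_000))  # all far below 19 kHz

mixture = sensing + env_noise

# Ideal FFT-domain band-pass filter: keep only the 19-21 kHz band.
spectrum = np.fft.rfft(mixture)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum[(freqs < 19_000) | (freqs > 21_000)] = 0
recovered = np.fft.irfft(spectrum, n=t.size)

# The probe tone survives essentially untouched.
corr = np.corrcoef(recovered, sensing)[0, 1]
print(round(corr, 3))  # → 1.0
```

Noise that did fall inside the sensing band would not be removable this way, which is consistent with the small accuracy drop the team reports in louder environments.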
The core SonicASL algorithm can be implemented and deployed on any smartphone, he says.
SonicASL can be adapted for other sign languages
Unlike systems that put the responsibility for bridging the communications gap on the deaf, SonicASL flips the script, encouraging the hearing population to make the effort.
An added benefit is SonicASL’s flexibility: it can be adapted for sign languages other than ASL, Jin says.
“Different sign languages have diverse features, with their own rules for pronunciation, word formation and word order,” he says. “For example, the same gesture may represent different sign language words in different countries. However, the key functionality of SonicASL is to recognize various hand gestures representing words and sentences in sign languages, which are generic and universal. Although our current technology focuses on ASL, with proper training of the algorithmic model, it can be easily adapted to other sign languages.”
The next steps, says Jin, will be expanding the sign vocabulary that can be recognized and differentiated by SonicASL, as well as working to incorporate the ability to read facial expressions.
“The proposed SonicASL aims to develop a user-friendly, convenient and easy-to-use headset-style system to promote and facilitate communication between the deaf and hearing populations,” says Jin.