Sign language interpreter at concert

Machine learning app allows Alexa to respond to American Sign Language

Image credit: Dreamstime

A developer has created an app which could help people with hearing and speech impairments communicate with smart speakers.

While smart speakers such as the Amazon Echo and Google Home have been enthusiastically adopted in the past few years, these devices are not well suited to everyone. In particular, people with hearing and speech impairments are likely to be frustrated by the communication barrier they present.

Smart speakers use natural language processing to make sense of the series of phonemes they pick up as their users issue commands. Although there has been rapid progress in this field, smart speakers still struggle to parse commands that are mumbled, thickly accented or unconventionally intoned.

In an attempt to render smart speakers more accessible, developer and inventor Abhishek Singh has created an app capable of helping Alexa respond to sign language. He presented the project in a video uploaded to his YouTube channel.

“If voice is the future of computing, what about those who cannot [speak]?” Singh signs in the introduction of his video.

Singh’s system requires a camera to be connected to the smart speaker; the camera records slow hand gestures performed against a plain background. These gestures – corresponding to American Sign Language – are translated into text, then vocalised using text-to-speech software on a laptop. Alexa is then able to respond with text and speech.
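The final step of that pipeline – turning the recognised text into audible speech – can be reproduced directly in a browser. The sketch below uses the standard Web Speech API, which is one plausible mechanism rather than necessarily the one Singh used; the wake word and phrasing are illustrative.

```ts
// Minimal sketch: vocalise recognised text with the browser's Web Speech API.
// Assumes a standard browser environment; the command text is an illustrative example.
function speakCommand(recognisedText: string): void {
  const utterance = new SpeechSynthesisUtterance(recognisedText);
  utterance.lang = 'en-US';  // match the smart speaker's language setting
  utterance.rate = 0.9;      // slightly slower speech tends to be parsed more reliably
  window.speechSynthesis.speak(utterance);
}

// Once the gesture recogniser produces a word sequence, prepend the wake word:
speakCommand('Alexa, what is the weather today?');
```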

Singh used the open-source TensorFlow.js deep-learning library to train his program to recognise American Sign Language.
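One common way to build this kind of recogniser with TensorFlow.js – not necessarily Singh's exact approach – is transfer learning in the browser: a pretrained MobileNet network extracts features from webcam frames, and a k-nearest-neighbours classifier is trained on a handful of examples per sign. The sketch below is illustrative only; the labels and structure are assumptions, not Singh's code.

```ts
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

// Illustrative sketch of in-browser sign recognition via transfer learning.
// A pretrained MobileNet supplies image embeddings; a KNN classifier is trained
// on a few webcam frames per sign. Labels are hypothetical examples.
async function buildRecogniser(video: HTMLVideoElement) {
  const net = await mobilenet.load();          // pretrained feature extractor
  const classifier = knnClassifier.create();   // trained live in the browser
  const webcam = await tf.data.webcam(video);

  // Capture one frame while the user holds a gesture and file it under `label`.
  async function addExample(label: string): Promise<void> {
    const img = await webcam.capture();
    const activation = net.infer(img, true);   // embedding from MobileNet
    classifier.addExample(activation, label);
    img.dispose();
  }

  // Classify the current frame and return the most likely sign.
  async function predictSign(): Promise<string> {
    const img = await webcam.capture();
    const activation = net.infer(img, true);
    const result = await classifier.predictClass(activation);
    img.dispose();
    activation.dispose();
    return result.label;
  }

  return { addExample, predictSign };
}
```

Each predicted sign could then be appended to a sentence buffer and handed to a text-to-speech step like the one sketched earlier.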

At present, the system requires use of a camera and a laptop attached to the smart speaker, although Singh believes that in the future, home devices could be designed to be more accessible, featuring cameras and screens for alternative means of communication. Smart video devices were on show at CES 2018 – Google unveiled forthcoming Assistant-based smart displays from JBL, Lenovo, LG and Sony – suggesting that this is a direction in which the consumer technology industry is already moving.

While some approaches exist to make devices such as computers and smartphones more accessible, attempts to teach computers to understand sign language have been limited, in part due to the complex combination of facial expressions and body and hand movements used to convey meaning. A Microsoft-led project to use a motion-sensing camera to translate sign language into text was killed off after the camera was discontinued in 2017.
