Wearable transcribing device

Wearable device enables control of a computer with thoughts

Image credit: Lorrie Lejeune/MIT

Researchers at the Massachusetts Institute of Technology (MIT) have developed a system capable of transcribing words without the user speaking them aloud.

According to the researchers, a silent device like this could allow wearers to interact with a computer with minimal disruption and without diverting their attention. For instance, a user could search the internet for the definition of a word they hear in a lecture without looking at their device.

“The motivation for this was to build an [intelligence augmentation] device,” said Arnav Kapur, who led the project at the MIT Media Lab.

“Our idea was: could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

The silent computing system consists of a pair of bone-conduction headphones, which transmit vibrations through the bones of the skull to the inner ear, and electrodes placed on the face. The electrodes detect the tiny neuromuscular signals triggered by internal verbalisation; these signals are then converted into words.
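
The researchers' pipeline isn't published in this coverage, but the signal path can be sketched: window the raw electrode readings, extract a simple per-channel feature, and map it to the nearest known vocabulary word. In the minimal Python illustration below, the channel count, window length, feature and nearest-centroid classifier are all assumptions for the sketch, not the MIT team's actual method.

```python
import numpy as np

# Hypothetical sketch of the signal-to-word path: facial-electrode (EMG)
# samples in, vocabulary word out. All parameters here are assumed.

N_CHANNELS = 4        # electrode sites on the face (assumed)
WINDOW = 250          # samples per analysis window (assumed)

def features(window: np.ndarray) -> np.ndarray:
    """Root-mean-square energy per channel: one simple EMG feature."""
    return np.sqrt((window ** 2).mean(axis=1))    # shape: (N_CHANNELS,)

def classify(feat: np.ndarray, centroids: dict) -> str:
    """Pick the vocabulary word whose stored feature centroid is closest."""
    return min(centroids, key=lambda w: np.linalg.norm(feat - centroids[w]))

# Toy usage: pretend we recorded one window and know centroids for two words.
rng = np.random.default_rng(0)
centroids = {"up": rng.normal(1.0, 0.1, N_CHANNELS),
             "down": rng.normal(2.0, 0.1, N_CHANNELS)}
window = rng.normal(1.0, 0.1, (N_CHANNELS, WINDOW))
print(classify(features(window), centroids))      # -> "up"
```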

The researchers initially carried out tests to determine where on the face they could pick up the most useful neuromuscular signals. They found that signals from seven locations – later reduced to four – were reliable enough to distinguish internally vocalised, or ‘subvocalised’, words.
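
The article doesn't detail how seven sites were winnowed to four, but one common approach is to score each candidate site by how cleanly its signal separates two subvocalised words and keep the top scorers. The Fisher-style score and synthetic data below are assumptions for illustration only.

```python
import numpy as np

# Illustrative only: rank seven candidate electrode sites by how well their
# signal energies separate two subvocalised words, then keep the best four.

rng = np.random.default_rng(1)
N_SITES, N_TRIALS = 7, 40

# Synthetic per-site signal energies for two words ("yes" vs "no").
yes = rng.normal(np.linspace(1.0, 2.0, N_SITES), 0.2, (N_TRIALS, N_SITES))
no = rng.normal(np.linspace(2.0, 1.0, N_SITES), 0.2, (N_TRIALS, N_SITES))

# Fisher-style score: between-class distance over within-class spread.
score = (yes.mean(0) - no.mean(0)) ** 2 / (yes.var(0) + no.var(0))

best_four = np.argsort(score)[::-1][:4]
print("most informative sites:", best_four)
```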

Next, they collected data on some simple computational tasks with limited vocabularies, such as basic arithmetic or a game of chess. For each of these tasks, they trained a neural network to associate certain neuromuscular signals with words. According to the researchers, these neural networks could be quickly refined to understand a specific user.
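
As a hedged sketch of that training-and-refinement loop: fit a classifier on pooled data over a small task vocabulary, then update it with a handful of samples from a new user. A linear model stands in for the researchers' neural network here, and the vocabulary, features and data are all invented.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
VOCAB = ["one", "two", "plus", "equals"]   # toy arithmetic vocabulary (assumed)
N_FEAT = 4                                 # one feature per electrode site

def fake_signals(n_per_word, shift=0.0):
    """Synthetic feature vectors: one Gaussian cluster per vocabulary word."""
    X = np.vstack([rng.normal(i + shift, 0.3, (n_per_word, N_FEAT))
                   for i in range(len(VOCAB))])
    y = np.repeat(np.arange(len(VOCAB)), n_per_word)
    return X, y

# Train on pooled data, then refine on a few samples from a new user whose
# signals sit slightly off the pooled clusters (simulating individual variation).
X, y = fake_signals(50)
clf = SGDClassifier(loss="log_loss", random_state=0).fit(X, y)

X_user, y_user = fake_signals(5, shift=0.4)
clf.partial_fit(X_user, y_user)            # quick per-user refinement

X_test, y_test = fake_signals(20, shift=0.4)
print(f"accuracy on the new user: {clf.score(X_test, y_test):.2f}")
```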

The MIT group tested their system with 10 participants, who each used the system for 90 minutes to control a computer. In one of these tests, participants used the system to report opponents’ chess moves and in turn receive silent recommendations from a computer. The system achieved an overall transcription accuracy of 92 per cent.
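
For clarity, an ‘overall’ accuracy pooled across participants is simply total correct words over total words transcribed. The per-participant counts in this toy calculation are invented; only the 92 per cent figure comes from the study.

```python
# Hypothetical per-participant word counts for the 10 users (invented numbers).
correct = [115, 110, 108, 120, 112, 118, 105, 114, 109, 113]
total = [125, 120, 118, 128, 122, 128, 115, 124, 119, 123]
print(f"overall transcription accuracy: {sum(correct) / sum(total):.0%}")  # 92%
```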

According to Kapur, the system’s accuracy could be improved by collecting more data to train the neural network. He and his colleagues are already gathering data from more complex conversations in order to expand the system’s vocabulary.

“We’re in the middle of collecting data, and the results look nice,” said Kapur. “I think we’ll achieve full conversation someday.”

The researchers suggest that a future iteration of the device could be used to control machinery or direct air traffic in noisy surroundings, to enable silent communication in special operations, and to provide synthesised speech for people with disabilities that prevent normal vocalisation.
