University of Lincoln researchers are developing an indoor navigation app for the blind based on Google's Project Tango

Smartphone 'eyes' for the blind developed in Google-funded project

Smart vision systems for smartphones and tablets are being developed by researchers to help blind and visually impaired people navigate indoor environments.

The technology, developed by researchers at the University of Lincoln, relies on colour and depth sensors that are part of some modern smartphones and tablets.

The system would provide 3D mapping and localisation, navigation and object recognition, communicating with visually impaired users via vibrations, sounds or spoken words.
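
As an illustration of that last point, the sketch below shows one way events from such a system might be routed to vibration, sound or speech. The event types, class names and routing rules here are hypothetical; the article does not describe the actual interface.

```python
# A minimal sketch (not the Lincoln team's code) of mapping environment
# events to the feedback channels the article mentions. All names here
# are hypothetical illustrations.
from dataclasses import dataclass
from enum import Enum, auto


class Channel(Enum):
    VIBRATION = auto()  # short haptic pulse, e.g. for urgent obstacle warnings
    SOUND = auto()      # non-verbal tone, e.g. for ambient orientation cues
    SPEECH = auto()     # spoken description, e.g. for naming a recognised room


@dataclass
class EnvironmentEvent:
    kind: str      # e.g. "obstacle", "room_recognised", "waypoint"
    urgency: int   # 0 (informational) .. 2 (immediate hazard)
    detail: str    # human-readable payload for the speech channel


def choose_channel(event: EnvironmentEvent) -> Channel:
    """Pick a feedback modality: urgent events get haptics, rich
    descriptions get speech, everything else gets a tone."""
    if event.urgency >= 2:
        return Channel.VIBRATION
    if event.kind == "room_recognised":
        return Channel.SPEECH
    return Channel.SOUND


if __name__ == "__main__":
    events = [
        EnvironmentEvent("obstacle", 2, "chair half a metre ahead"),
        EnvironmentEvent("room_recognised", 0, "you are in a kitchen"),
    ]
    for e in events:
        print(e.kind, "->", choose_channel(e).name)
```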

“There are many visual aids already available, from guide dogs to cameras and wearable sensors,” said Nicola Bellotto, an expert on machine perception and human-centred robotics at Lincoln’s School of Computer Science and the leader of the project. “Typical problems with the latter are usability and acceptability. If people were able to use technology embedded in devices such as smartphones, it would not require them to wear extra equipment which could make them feel self-conscious.”

The project, funded through a Google Faculty Research Award, will work with Google’s experimental 3D-mapping Project Tango smartphone, unveiled last year.

“There are existing smartphone apps that are able to, for example, recognise an object or speak text to describe places,” said Bellotto. “But the sensors embedded in the device are still not fully exploited. We aim to create a system with ‘human-in-the-loop’ that provides good localisation relevant to visually impaired users and, most importantly, that understands how people observe and recognise particular features of their environment.”

The system proposed by the Lincoln researchers would identify visual clues in the environment captured by the smartphone camera and analyse the data to provide information about the surrounding space to the user. It could, for example, recognise what kind of space the user is in and alert them to obstacles as they move through the room.
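
To make that pipeline concrete, here is a minimal sketch of one way obstacle proximity could be estimated from a depth frame, assuming depth arrives as a 2D array of distances in metres, as a Tango-class depth sensor could supply. This is an illustrative stand-in, not the project's actual algorithm.

```python
# Hypothetical obstacle check on a single depth frame; thresholds and the
# central-band heuristic are illustrative assumptions, not the real method.
import numpy as np


def nearest_obstacle(depth_m: np.ndarray, max_range_m: float = 4.0) -> float | None:
    """Return the distance to the closest valid point in the central band
    of the frame (roughly the user's walking path), or None if nothing
    valid lies within range."""
    h, w = depth_m.shape
    band = depth_m[h // 3: 2 * h // 3, w // 3: 2 * w // 3]  # centre of view
    valid = band[(band > 0.1) & (band < max_range_m)]       # drop sensor noise
    return float(valid.min()) if valid.size else None


# Example: a synthetic 240x320 frame with an object about 1.2 m ahead.
frame = np.full((240, 320), 3.5)
frame[100:140, 150:200] = 1.2
d = nearest_obstacle(frame)
if d is not None and d < 1.5:
    print(f"Obstacle roughly {d:.1f} m ahead")  # would trigger a vibration alert
```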

The smart system, developed with the help of machine learning expert Oscar Martinez Mozos and mobile robotics researcher Grzegorz Cielniak, will be able to learn from experience, adapting to the user’s needs based on previously encountered environments and interactions with the user. As a result, the longer an individual uses the system, the better it will operate.
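
The article does not specify the learning method, but the sketch below illustrates the general idea of improving with use: a simple online nearest-centroid place classifier whose guesses get better each time the user confirms a location. The feature vectors and labels are hypothetical.

```python
# A minimal sketch of "learning from experience": running-mean centroids
# per place, updated one confirmed observation at a time. Purely
# illustrative; the project's actual learning approach is not described.
import numpy as np


class OnlinePlaceLearner:
    def __init__(self):
        self.centroids: dict[str, np.ndarray] = {}
        self.counts: dict[str, int] = {}

    def update(self, features: np.ndarray, label: str) -> None:
        """Fold one user-confirmed observation into the running mean."""
        n = self.counts.get(label, 0)
        c = self.centroids.get(label, np.zeros_like(features))
        self.centroids[label] = (c * n + features) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, features: np.ndarray) -> str | None:
        """Guess the place by the closest learned centroid."""
        if not self.centroids:
            return None
        return min(self.centroids,
                   key=lambda k: np.linalg.norm(self.centroids[k] - features))


learner = OnlinePlaceLearner()
learner.update(np.array([0.9, 0.1]), "kitchen")   # user confirms "kitchen"
learner.update(np.array([0.1, 0.8]), "corridor")  # user confirms "corridor"
print(learner.predict(np.array([0.85, 0.2])))     # -> "kitchen"
```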
