The speakers and microphones built into every smartphone can act as a sonar system, letting users control their devices remotely by turning virtually any nearby surface into a touchscreen.
In a video demonstration, a team from the University of Washington in the USA showed the system tracking, in real time, the movement of a user’s finger as it draws on a piece of paper placed next to the phone.
The system would be particularly useful for smartwatches and other wearable devices with very small screens, which can make direct interaction difficult for users.
"You can't type very easily onto a smartwatch display, so we wanted to transform a desk or any area around a device into an input surface," said Rajalakshmi Nandakumar, a doctoral student in computer science and engineering at the University of Washington, who led the project. "I don't need to instrument my fingers with any other sensors - I just use my finger to write something on a desk or any other surface and the device can track it with high resolution."
Dubbed FingerIO, the system was funded by Google and the US National Science Foundation.
Unlike other gesture control systems, the technology relies on sound waves emitted by the device's speakers. These sound waves are inaudible but offer several advantages over camera-based gesture control. For example, the gesture doesn’t need to be performed in the device’s direct line of sight, and because sound waves travel through fabric, the user can control the phone even while it is still in his or her pocket. The sound wave is reflected by the moving finger and travels back to the phone, where it is detected by the device’s microphones. The researchers say FingerIO also has advantages over radar-based systems, as it doesn’t require any additional sensors.
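The underlying sonar principle can be sketched in a few lines: the phone emits an inaudible pulse, the finger reflects it, and the round-trip delay of the echo gives the finger's distance. The sample rate and speed of sound below are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch of the sonar ranging principle described above.
# SPEED_OF_SOUND and SAMPLE_RATE are illustrative assumptions.

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C
SAMPLE_RATE = 48_000     # Hz, a common smartphone audio rate

def echo_distance(delay_samples: int) -> float:
    """Distance to the reflecting finger, given the echo delay in samples."""
    round_trip_s = delay_samples / SAMPLE_RATE
    # Halve the round-trip distance: the sound travels out and back.
    return SPEED_OF_SOUND * round_trip_s / 2

# An echo arriving 56 samples after emission implies a finger ~0.2 m away.
distance_m = echo_distance(56)
```

Note that one audio sample at 48 kHz corresponds to about 7 mm of one-way distance, which is why raw echo timing alone is too coarse for fine finger tracking.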
"Acoustic signals are great - because sound waves travel much slower than the radio waves used in radar, you don't need as much processing bandwidth so everything is simpler," said Shyam Gollakota, assistant professor of computer science and engineering at the University of Washington and senior author of the study.
“From a cost perspective, almost every device has a speaker and microphones so you can achieve this without any special hardware."
However, sound waves do have their limitations. Sonar echoes are usually quite weak and not precise enough to reliably differentiate between individual letters or subtle hand gestures.
The Washington team managed to overcome this problem by using a type of signal called Orthogonal Frequency Division Multiplexing (OFDM), which is usually employed in wireless communications.
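The core idea of OFDM is to place data on many orthogonal subcarriers in the frequency domain and synthesise the time-domain signal with an inverse FFT; the receiver undoes this with a forward FFT and recovers each subcarrier independently. The sketch below illustrates that property only; the FFT size, band and subcarrier values are assumptions for illustration, not FingerIO's actual configuration.

```python
import numpy as np

# Illustrative OFDM symbol construction (parameters are assumptions).
N = 256                      # FFT size
SAMPLE_RATE = 48_000         # Hz
n_bins = N // 2 + 1          # rfft bin count for a real-valued signal

# Place unit-amplitude symbols on subcarriers in an inaudible band
# near the top of the phone's audio range (roughly 18-20 kHz here).
spectrum = np.zeros(n_bins, dtype=complex)
lo = int(18_000 / SAMPLE_RATE * N)
hi = int(20_000 / SAMPLE_RATE * N)
spectrum[lo:hi] = 1.0

# Inverse FFT yields the real time-domain signal the speaker would play.
symbol = np.fft.irfft(spectrum, n=N)

# Orthogonality: a forward FFT at the receiver recovers every subcarrier.
recovered = np.fft.rfft(symbol)
```

Because the subcarriers stay orthogonal, echoes of such a signal can be analysed per subcarrier, which is what makes fine-grained tracking possible despite weak reflections.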
In experiments, the team demonstrated accurate tracking of finger movements to within 8mm – enough to interact with today’s mobile devices.
The system uses algorithms that track phase changes in the echoes and correct errors in the finger location to achieve sub-centimetre accuracy.
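Phase tracking is what buys the sub-centimetre resolution: a small finger movement changes the echo's path length by twice the displacement, shifting the echo's phase by a fraction of a wavelength. A minimal sketch of that relationship, with an assumed carrier frequency (the paper's actual algorithms are more involved):

```python
import math

# Illustrative phase-to-displacement conversion; CARRIER_HZ is an
# assumed inaudible carrier, not a value from the study.
SPEED_OF_SOUND = 343.0           # m/s
CARRIER_HZ = 19_000.0            # Hz
WAVELENGTH = SPEED_OF_SOUND / CARRIER_HZ   # ~18 mm

def displacement_from_phase(delta_phase_rad: float) -> float:
    """Finger displacement implied by a phase change in the echo.
    The round trip doubles the path change, hence the factor of 4*pi
    instead of 2*pi in the denominator."""
    return delta_phase_rad * WAVELENGTH / (4 * math.pi)

# A quarter-cycle phase shift corresponds to only ~2.3 mm of motion,
# far finer than raw echo timing can resolve.
d = displacement_from_phase(math.pi / 2)
```

This is why phase changes, rather than echo arrival times alone, are the quantity worth tracking at these scales.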
"Given that your finger is already a centimetre thick, that's sufficient to accurately interact with the devices," said electrical engineering graduate student Vikram Iyer.
The system allows the user to turn the volume up and down, press a button or scroll through menus without touching the phone’s screen. It is even possible to write search commands or text in the air rather than typing on a tiny screen.
The team has created a prototype Android app that works with Samsung Galaxy S4 smartphones and a smartwatch customised with two microphones.
Next steps for the research team include demonstrating how FingerIO can be used to track multiple fingers moving at the same time, and extending its tracking abilities into three dimensions by adding additional microphones to the devices.