
How AI Wi-Fi technology could peer inside our homes

Image credit: Carnegie Mellon University

Whether it is Alexa listening in on our conversations, home CCTV cameras watching over our homes, or our phones counting every step we take, we’re used to our gadgets monitoring our lives. And now one highly experimental piece of technology looks set to take things even further – it could even become capable of watching us through walls.

Building on an existing pose-estimation system called DensePose, Carnegie Mellon University engineers take advantage of the humble Wi-Fi chips found in, well, almost everything, and use them to figure out where in a room we are standing – and even the position or pose we are striking.

Their system does this not by using any special sensors, but by taking the raw Wi-Fi signal data and applying some clever machine learning. And the researchers have now published their experiments in a paper, which shows the DensePose software accurately drawing a wireframe mesh identifying test subjects – and even correctly predicting the direction they are facing, and the position of their arms and legs.

Research professor Fernando De La Torre and his colleagues Jiaqi Geng and Dong Huang believe that such technology could be particularly useful for taking care of elderly people living at home.

“My parents live in Spain, and they are getting older,” says De La Torre. “I talk to them every day, but I really would like to have a system, not relying on cameras, that can monitor their wellbeing. So, if something happens, it just alerts me. If they fall, can we detect falls?”


But this is just scratching the surface of the technology’s potential. “There are companies that are already using different types of radio frequencies to monitor people, especially in hospital settings,” explains De La Torre.

In his team’s paper, they cite the emergence of millimetre wave (mmWave) as an existing technology that can, in principle, be used in a similar way. But it faces two significant drawbacks. First, mmWave signals cannot penetrate objects because of their high frequency, which limits where the technology can be used. And second, there’s a more pragmatic drawback – mmWave hardware is much more expensive than standard Wi-Fi equipment.

“Everything is going to depend on scale. Routers are everywhere. We have mass-​produced this technology, so presumably it is going to be accessible to more people and cheaper just because we use it for other things,” says De La Torre.

To conduct their research, the engineers built a simple set-up of two computers on either side of a room, each with three separate Wi-Fi antennas. They then walked between the two and were able to analyse the ‘channel state information’ (CSI) to see how the presence of humans interfered with the signals.

But turning interference into people isn’t easy. As they note in their paper: “The CSIs are complex decimal sequences that do not have spatial correspondence to spatial locations, such as the image pixels.” Similarly, traditional signal-analysis techniques such as measuring the ‘time of flight’ between transmitter and receiver, or the angle at which the signal arrives, are not accurate enough to detect poses – they can only localise a person to within around 0.5m, because of ‘random phase shift’ and interference from other Wi-Fi and electronic devices.
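To give a flavour of what the raw data looks like, here is a minimal sketch of how complex CSI samples might be split into the amplitude and phase features a model would consume. The array shapes, function name and synthetic data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def csi_features(csi):
    """Split complex CSI samples into amplitude and unwrapped phase features."""
    amplitude = np.abs(csi)
    # Unwrapping removes artificial 2*pi jumps in the phase across subcarriers
    phase = np.unwrap(np.angle(csi), axis=-1)
    return amplitude, phase

# Toy input: 3 antennas x 30 subcarriers of synthetic complex CSI
rng = np.random.default_rng(0)
csi = rng.normal(size=(3, 30)) + 1j * rng.normal(size=(3, 30))
amp, ph = csi_features(csi)
```

The amplitude tends to be the more stable of the two, which is one reason the random phase shifts mentioned above make classical approaches unreliable.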

That’s why they turned to deep-learning technology for the analysis, to see if it could spot patterns and commonalities amid all of the noise.

“We have what is called a supervised process,” says De La Torre, who describes how the algorithm was trained by feeding it both the data from the Wi-Fi and data from a camera that had been filming subjects walking around. The algorithm used the captured amplitude and phase data from the Wi-Fi antennas to identify subjects and their positions, and then attempted to estimate their current poses. It subsequently checked its work against the video to see if it had got the ‘correct’ answer – so it could gradually improve its guesses over time.
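The supervised loop can be sketched in miniature. This toy example is not the team’s actual network – a simple linear model, the shapes and the learning rate are all stand-in assumptions – but it shows the idea: predict pose coordinates from Wi-Fi features, compare against the camera-derived labels, and repeatedly nudge the model to reduce the error:

```python
import numpy as np

# Hypothetical shapes: N frames of flattened CSI features (X) paired with
# camera-derived keypoint coordinates (Y) acting as the 'correct' answers
rng = np.random.default_rng(1)
N, n_features, n_outputs = 200, 90, 34        # e.g. 17 keypoints x (x, y)
X = rng.normal(size=(N, n_features))
true_W = rng.normal(size=(n_features, n_outputs))
Y = X @ true_W + 0.01 * rng.normal(size=(N, n_outputs))

# A linear model stands in for the deep network: predict poses from the
# Wi-Fi features, compare with the camera labels, and adjust the weights
W = np.zeros((n_features, n_outputs))
lr = 0.1
for step in range(500):
    pred = X @ W
    grad = X.T @ (pred - Y) / N               # gradient of mean squared error
    W -= lr * grad

final_err = np.mean((X @ W - Y) ** 2)
```

After training, the error shrinks towards the noise floor – the same principle that lets the real network improve its guesses over time.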

“The interesting thing is once the network has learned to extract where the body is from the amplitude and signal reflections, it is able to generalise,” says De La Torre. “You can try different people, with different clothing, doing different activities and the network [will still] detect from the reflections of these Wi-Fi signals.”

So given that the algorithm can generalise enough to work with different types of people and clothing, could the technology soon be finding its way into homes and workplaces?

“It’s a proof of concept, which is very nice that we are able to achieve this kind of accuracy in the estimation and location of body parts from Wi-Fi, but still, there are many problems before we can use this technology for the regular Wi-Fi that we have at home,” says De La Torre.

Though the technology shows huge promise, it is still very early days, and the technical challenge of turning the proof of concept into something useful in our lives remains significant.

“The Wi-Fi signal is very hard to model mathematically,” says De La Torre. “It depends on the temperature, the humidity, or any interference. If I have a phone [emitting electromagnetic signals], it’s going to be distorted. If I change my furniture, the Wi-Fi signal is going to change. So in real operational scenarios, when there are changes it’s going to be a difficult problem to solve.”


Even in terms of medical uses – the team’s go-to idea of where it could play a role – the researchers believe that they need to figure out how to capture position and pose data at a greater level of detail to be useful.

“Imagine in the future you want to monitor the wellbeing of elderly people at home, and you want to measure psychomotor retardation or agitation,” says De La Torre, speaking of two common conditions that the tracking technology could look out for. “Things like this are going to be very hard. Maybe not impossible but very difficult.”

But there are already other applications where the technology could come in useful. “Simpler tasks like detecting if there is someone in a room, I think is very feasible,” says De La Torre. “You can think of your Wi-Fi as a surveillance system. You leave your house and then the Wi-Fi monitors if somebody enters or not.”
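That simpler presence-detection task can be sketched in a few lines. This is a toy illustration with synthetic numbers – the function name, threshold and signal model are assumptions, not from the research – but it captures the principle: compare how much the signal amplitude fluctuates against an empty-room baseline:

```python
import numpy as np

def room_occupied(amplitudes, baseline_std, k=3.0):
    """Flag presence when amplitude variation well exceeds the empty-room baseline."""
    return np.std(amplitudes) > k * baseline_std

rng = np.random.default_rng(2)
# Empty room: the amplitude hovers around a steady value with small noise
empty = 1.0 + 0.01 * rng.normal(size=500)
baseline_std = np.std(empty)

# A person moving through the room modulates the signal far more strongly
person = 1.0 + 0.2 * np.sin(np.linspace(0, 20, 500)) + 0.01 * rng.normal(size=500)
```

A real deployment would need to cope with the environmental drift De La Torre describes – moved furniture, temperature, other devices – but the basic signal-versus-baseline comparison is why he calls this the most feasible near-term use.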

Similarly, it’s easy to imagine that if the technology is commercialised, smart home devices like Amazon’s Alexa and Apple’s HomePod could one day use it to locate people around the home, making it possible to say ‘Turn on the lights’ and have the assistant know which room you’re referring to.

However, getting there will not be without challenges. For example, there’s the question of privacy. Do we really want our Wi-Fi devices watching us, even looking through walls, and figuring out what we’re up to?

De La Torre isn’t so worried. “It’s much more privacy preserving than having cameras,” he says, contrasting it with what the likes of Google know about us already. “Knowing everything that I search for or knowing everything that I’m curious about, that reveals much more about me. I think this is much more dangerous than knowing the particular thing you might be doing at home.”

In fact, the team believes that given the trade-off, DensePose could be good for privacy. In the paper, they argue that “Wi-Fi signals can serve as a ubiquitous substitute for RGB images for human sensing in certain instances”.

So perhaps one day, DensePose-inspired technology could be everywhere. But to get there, some technical leaps are still required.

“We need to improve robustness,” says De La Torre. And realistically, this means improving the data-processing model. This is where the researchers have a significant challenge ahead.

“Deep-learning models typically are as good as the data that they have been trained on,” says De La Torre. “The main limitation on research is access to raw Wi-Fi signals. There are not that many databases out there that are publicly available for people to work on. It’s not trivial to collect this data.”

He compares their data situation unfavourably to the likes of generative-AI sensations ChatGPT and Dall-E, which have been trained on millions of words and images gathered from across the internet. Sadly, DensePose doesn’t have an equivalent it can train on. But he does hope that will change soon.

“Industry is interested in this topic,” says De La Torre. “When industry is interested, usually they put money into work on this area.”
