With pervasive computing having an increasing influence over our lives, embedded processors are set to become part of the very fabric of existence.
A future for pervasive computing was predicted in a famous 1991 paper by the late Mark Weiser of Xerox PARC, entitled 'The Computer for the 21st Century'. Weiser insisted that "the most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it… Most of the computers that participate in embodied virtuality will be invisible in fact as well as in metaphor."
By that definition the era of pervasive or ubiquitous computing is now just two years away, according to Dan Russell, director of the User Sciences and Experience Group at IBM's Almaden Research Centre. By 2010, Russell suggested, people will, in many cases, have ceased to realise when they are using computers.
This is a rather unidimensional view, for there are many emerging forms of pervasiveness, some of which require the full participation and knowledge of the user. Pervasiveness, in fact, can be seen as representing an extension of conventional computing rather than a wholesale revolution. On this basis, even with present technology, pervasive computing has something to offer almost everybody, according to Bill Bodin, head of research at IBM's Pervasive Advanced Technology Laboratory in Austin, Texas. "For every customer, there is something in this lab," Bodin insists.
The sheer breadth of pervasive computing can be seen as a potential problem, since the field tends to coalesce into islands, each setting its own standards; yet these islands are likely to overlap increasingly as the technologies mature. This sets the stage for some conflicts that will need to be resolved; but it also means there is healthy diversity in the field, with ideas and technologies germinating on a fertile bed, driven by a host of differing requirements.
Leaders of the pack
The leading pervasive sectors are healthcare monitoring, intelligent transport systems, the smart home, the automated laboratory, environmental monitoring, and wearable computing. Some of the most pressing demands for pervasive computing are coming from laboratories in life sciences faced with proliferating numbers of experiments generating data at unprecedented rates. The use of sensors and other components can automate some of the data gathering and assimilation and, in some cases, even the experiments themselves are beginning to be conducted by robotic devices responding to changing conditions.
The point is that feedback from these different domains is accelerating innovation across the whole pervasive computing sphere. They share common challenges: discontinuous communication, scarce processing and memory resources, the need for low power consumption, software distribution, security, incompatibility between components, and real-time management of an ever-changing, constantly evolving device population.
All of these challenges are familiar to the mobile communications industry, where the biggest problem has been the pace of innovation, which causes some phone handsets to become obsolete almost as soon as they have been deployed. This is a problem familiar also to writers of books about IT in general, but particularly the mobile arena.
The emerging pervasive era of computation brings some new problems that have only begun to emerge in the current generation of mobile applications. At least most existing mobile applications adhere to the conventional software model, split between the client device and a server or Web-based component, interacting either continuously or intermittently.
Yet pervasive computing shatters this cosy structure into multiple roaming fragments of code with no permanent home. Furthermore, the application may have to become 'context aware', in which case the actual code executed on behalf of a given user may depend on the time, location, and other environmental factors. Location dependence is already a factor in some mobile applications: for example, in targeting adverts to a handset, alerting people to nearby places they might like to visit, possibly taking account of personal preferences.
But location is only one aspect of context, which also depends on other factors that vary over time, as well as on the person's current activity and recent history. This is the greatest challenge, lying in the realm of artificial intelligence, where various aspects, such as speech recognition, have proved intractable. Even so, the challenges at the device and communication levels should not be underestimated. At present the pervasive field can be split into real applications and trials or laboratory demonstrations. Both rely utterly on wireless communications of some form, the common thread linking most forms of pervasiveness.
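To make the idea of context-aware dispatch concrete, it can be sketched in a few lines: the application consults a snapshot of the user's situation and chooses which code path to run. The context fields and handler names below are purely illustrative, not drawn from any of the projects described here.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class Context:
    """The environmental factors a context-aware application might consult."""
    location: str       # a named zone resolved from GPS or local beacons
    clock: datetime
    activity: str       # e.g. 'walking', 'driving', 'at_desk'

def choose_handler(ctx: Context) -> str:
    """Select which code path runs for this user right now."""
    if ctx.activity == "driving":
        return "voice_only_ui"     # suppress visual interaction at the wheel
    if ctx.location == "museum" and time(9) <= ctx.clock.time() <= time(17):
        return "exhibit_guide"     # opening hours only
    return "default_ui"

print(choose_handler(Context("museum", datetime(2008, 6, 2, 11, 30), "walking")))
# exhibit_guide
```

The hard part, of course, is not the dispatch itself but inferring fields like `activity` reliably from noisy sensor data.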
The real applications are mostly based on commercially available mobile phone handsets or PDAs, while many of the trials involve embedded components or chips on wearable devices, environmental or scientific sensors, or appliances such as cookers or refrigerators.
Such components are commonly associated with pervasiveness, yet in the immediate future much of the running will come from mobile phone or PDA-based applications for one simple reason: just about everybody has one already.
The mobile phone also has significant processing capability and three vital components: a camera, a microphone, and the ability to pinpoint location via the global positioning system (GPS). As a result, the device can serve as a client platform for pervasive applications, as well as a communications cog in a network and as a sensor for gathering data.
It is being used in the latter role as a sensor in a number of environmental monitoring trials, including one to map noise pollution involving Cambridge University and the mobile service provider O2. The initial plan is to hand out mobiles equipped with GPRS data cards to six local people, increasing to 50, all in the city of Cambridge, and then, if successful, extending the scheme to many more people using their own mobiles, possibly in other locations.
The project will monitor different sources of noise pollution in real time, and feed the data back for analysis and mapping. "We need to use geographic information systems (GIS) to plot the data on a map in real time," says Cambridge University research associate Eiman Kanjo, who is working on several environmental projects. "We aim to build a real-time sound map of Cambridge."
This could be accessed by people and used to avoid bottlenecks or busy periods when travelling, Kanjo suggests.
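At its simplest, a real-time sound map of the kind Kanjo describes could bucket GPS-tagged decibel readings into grid cells and summarise each cell. The sketch below assumes an illustrative cell size and naively averages decibel values (a real system would average sound power before converting back to decibels), so it only conveys the shape of the pipeline.

```python
from collections import defaultdict

CELL = 0.001  # grid resolution in degrees, roughly 100 m; illustrative value

def cell_for(lat: float, lon: float) -> tuple:
    """Snap a GPS fix to a map grid cell."""
    return (round(lat / CELL), round(lon / CELL))

def build_sound_map(readings):
    """Average decibel readings per grid cell: a crude real-time sound map.
    (Naive: decibels are logarithmic, so this is only an approximation.)"""
    sums = defaultdict(lambda: [0.0, 0])
    for lat, lon, db in readings:
        acc = sums[cell_for(lat, lon)]
        acc[0] += db
        acc[1] += 1
    return {cell: total / n for cell, (total, n) in sums.items()}

readings = [(52.2053, 0.1218, 72.0),   # two handsets near the same spot
            (52.2053, 0.1218, 68.0),
            (52.2100, 0.1300, 55.0)]   # a quieter cell elsewhere
print(build_sound_map(readings))
```

Each handset only needs to upload `(lat, lon, dB)` triples over GPRS; the aggregation and GIS plotting happen server-side.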
There is also scope for integrating other sensors into mobile handsets, and in another project Kanjo and colleagues have monitored carbon monoxide levels in Cambridge.
As Kanjo points out, these projects demonstrate the great potential for mobile phones to perform environmental monitoring more efficiently and cost-effectively than dedicated fixed sensors, especially in urban areas.
Mobile phones can also be used for providing information and warnings of danger, or problems such as traffic congestion, or events such as a rally that a person may wish to avoid. Such information could be relayed by the service provider, using GPS to identify those of its subscribers within the qualifying distance.
There is the potential for using Bluetooth to relay messages locally between mobile handsets, saving on cellular bandwidth and providing an efficient mechanism for distributing relevant location-based information. Use of very short range radio for the final leg of message distribution will help make end-to-end wireless communication scalable. Currently this means using Bluetooth, but there is a new ultra-low-power radio technology, WiBree, designed for emerging very small pervasive devices.
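Hop-limited rebroadcast is one simple way such local relaying could work: each handset passes the message on to its short-range neighbours, with a time-to-live to stop it propagating indefinitely. The sketch below is a toy model of that idea, not any deployed Bluetooth protocol.

```python
def flood(links, origin, ttl):
    """Hop-limited flooding: starting from one handset, the message is
    rebroadcast to short-range neighbours until the time-to-live expires."""
    reached = {origin}
    frontier = {origin}
    for _ in range(ttl):
        # everyone who heard it last round passes it to their neighbours
        frontier = {n for node in frontier for n in links[node]} - reached
        reached |= frontier
    return reached

# Handsets A..E; edges link those currently within Bluetooth range
links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"], "E": []}
print(sorted(flood(links, "A", 2)))  # ['A', 'B', 'C']
```

With a small time-to-live, a message stays local to its point of origin, which is exactly what location-based distribution wants.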
But whatever radio technology is used, the proliferation of smaller wireless-enabled devices that people wear, have embedded in their cars or home, could cause congestion at the local level, as well as over the cellular access network. This problem could be most acute within the home, where Bluetooth competes with Wi-Fi and cordless phones within the same spectrum.
This challenge is being met by several projects, including one at the Technical University of Munich. This project is in essence developing techniques related to mechanisms already used in some digital subscriber line (DSL) networks, which face a similar interference problem, in that case between services running in adjacent copper wires. In the radio case the objective is to adapt power output and data rate to optimise the performance of all systems together, avoiding 'selfish' behaviour by any one network or device.
Apart from the home, transportation is one of the hot areas for pervasive computing. Like the mobile phone, a car or commercial vehicle is well placed both to gather and receive information. Indeed, a project organised by Intel and the University of California, Berkeley performed a carbon monoxide monitoring exercise similar to the one in Cambridge, but using taxis rather than phones, in the Ghanaian capital of Accra.
The air samplers mounted on the taxis also monitored levels of sulphur dioxide and nitrogen dioxide, these too being contained in vehicular emissions, as well as being produced by power stations and heavy industry. This produced a far more detailed and accurate map than is possible with roadside samplers, because vehicles over time reach all parts of the city's road network and give a clearer picture of variations in levels of these compounds, identifying pollution trouble spots.
Vehicles will also be able to receive information about congestion, flooding, or adverse weather from roadside transmitters, and there is the potential for enforcing speed limits through remote throttle cut-off, initially perhaps on a voluntary basis. But this requires a coherent wireless infrastructure operating alongside cellular networks, and also raises the problem of reliable communication at speed, which makes handoff between devices or cells more difficult to execute reliably.
These problems are being tackled by the IEEE in a draft amendment to the 802.11 wireless LAN standard, called 802.11p or WAVE (Wireless Access in the Vehicular Environment), scheduled for publication in April 2009. This will operate in the 5.9GHz range, and just to make life more difficult, it will also support higher data rates for vehicular Internet access and be capable of delivering video at acceptable quality of service, which is difficult at speeds in excess of 50mph.
Routes to recovery
Healthcare is the other area of major activity for pervasive computing, with trials underway and potential benefits for people in the near future. There is the potential for wearable sensors to monitor blood pressure, heart rate, and body temperature, for example, and transmit the data over a mobile phone network to an automated diagnostic centre that could alert a doctor in the event of an abnormal reading.
This could be of value for people known to be at high risk of heart attacks or strokes for example. There is also the potential for immediate intervention in the face of an imminent cardiac arrest by triggering an internal defibrillator, a device to jump-start the heart by administering a small electric shock. Defibrillators have been in use for some time, but need regular visits to a hospital for reprogramming, according to Morris Sloman, professor of computing at London's Imperial College. "There is the potential for more dynamic updating, avoiding a visit to the hospital every three months," says Sloman, who is working on various health monitoring projects.
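At its simplest, the automated diagnostic step is a set of range checks on incoming vital signs, with anything out of range escalated to a clinician. The ranges and field names below are illustrative placeholders, not clinical thresholds.

```python
# Illustrative 'normal' ranges only; real thresholds would be set per patient
NORMAL = {"heart_rate": (50, 110),     # beats per minute
          "systolic_bp": (90, 140),    # mmHg
          "temp_c": (35.5, 38.0)}      # degrees Celsius

def check_vitals(reading):
    """Return the names of any vital signs outside their normal range."""
    return [name for name, value in reading.items()
            if name in NORMAL
            and not (NORMAL[name][0] <= value <= NORMAL[name][1])]

alerts = check_vitals({"heart_rate": 148, "systolic_bp": 120, "temp_c": 36.8})
print(alerts)  # ['heart_rate']
```

A real service would add trend analysis and per-patient baselines, but even this skeleton shows why the heavy lifting can live in the network rather than on the wearable sensor.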
Some of the most coherent programmes for tackling the greatest challenges of pervasive computing are in China, for example at the Chinese Academy of Sciences in Beijing, and the Department of Computer Science and Engineering at Jiaotong University, Shanghai.
The latter is focusing on two of the biggest problems, context aware computing, and conservation of resources, especially power, but also network bandwidth, memory, and processing cycles. "For the second target, I have proposed a generic software partitioning algorithm to save different types of resources," said Songqiao Han, one of the leading researchers at the university in Shanghai. "The algorithm is based on the network flow theory and uses component migration, replication, remote invocation and rebinding to save the limited resources."
Component migration involves moving executable software to the device best placed to execute it, while replication, remote invocation and rebinding together ensure that the overall software environment runs smoothly and that tasks are executed correctly wherever this takes place. This is all related to the first objective being pursued at Shanghai, context aware computing, which Han said involved taking existing mechanisms such as client/server and Code-On-Demand, and tying them together to ensure that the correct software components are activated at the right time on the basis of context.
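Han's partitioning algorithm rests on network-flow theory, which is beyond a short sketch, but the trade-off that migration exploits (compute time on a weak device versus transfer time over a slow link) can be shown with a much simpler greedy per-component placement. All names and numbers below are hypothetical, not drawn from the Shanghai work.

```python
def place_components(components, bandwidth, local_cpu, remote_cpu):
    """Toy placement: run each component wherever its total cost is lower,
    counting compute time plus any data that must cross the network.
    A real partitioner (like Han's) optimises all components jointly."""
    placement = {}
    for name, (cycles, data_bytes) in components.items():
        local_cost = cycles / local_cpu                         # seconds on device
        remote_cost = cycles / remote_cpu + data_bytes / bandwidth
        placement[name] = "remote" if remote_cost < local_cost else "local"
    return placement

components = {
    "ui": (1e6, 5e6),        # light compute, heavy data: cheaper to keep local
    "planner": (1e9, 1e4),   # heavy compute, little data: worth migrating
}
print(place_components(components, bandwidth=1e5,
                       local_cpu=1e8, remote_cpu=1e9))
# {'ui': 'local', 'planner': 'remote'}
```

The per-component greedy choice ignores interactions between components, which is precisely why the published algorithm needs network-flow machinery to find a globally good cut.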
But this leaves the challenge of determining what the context is on the basis of factors such as the user's actions or location. This challenge goes beyond programming mechanisms to the higher level matter of human/machine interaction. There lie the biggest challenges of all, according to A.J. Brush, a researcher in Microsoft's Visualisation and Interaction for Business and Entertainment (VIBE) group.
"Although I do not discount the technical challenges involved in ubiquitous computing, such as building novel sensors, and extracting useful information from large amounts of sensor data, I believe that many of the biggest challenges revolve around how ubiquitous computing applications will integrate into our lives," Brush says.
Brush has also worked on the societal dimension of pervasive computing under the banner of 'boundary management', which involves the increasingly blurred lines between work, rest and holiday. "Given that people have found being always available and constantly in communication with others to be addictive, evidenced by the term 'CrackBerry', I expect the availability of communication to only become more pervasive," she says.
"Going forward researchers in ubiquitous computing will need to pay careful attention to how technology helps people manage boundaries in their lives, either to eliminate or maintain them as desired."
Brush has identified a counter-trend to ever increasing availability for work, whereby people explicitly turn off their mobile phones or other pervasive devices. Pervasive applications will need to take into account people's desire not to be available, as well as providing the means for them to be so when desired, insists Brush.
Yet in a way non-availability fits in with the ethos of pervasiveness, which is all about disconnected or discontinuous computing, in which loosely coupled devices cooperate. As for whether the age of pervasive computing has arrived, it does seem that IBM's Russell was quite close to the mark in picking 2010, for there are currently plenty of trials and laboratory projects, but few real-life applications in which the user really is unaware that a computing device is involved.