Cutting-edge on campus

The UK's universities are leading the way in many aspects of ICT R&D. In the second of a three-part series, E&T surveys innovative projects at Imperial College London's Department of Computing.

Imperial College has one of the UK's largest computing departments, with 52 academic staff, 86 post-doctoral research assistants, and about 130 research students actively involved in areas as diverse as software engineering, distributed computing, artificial intelligence, high-performance computing, and visual information processing.

While computer theory remains important, the Department of Computing sits within Imperial's engineering faculty, and consequently its research is geared towards developing practical computing technology relevant to the needs of commerce, industry, science, and healthcare, according to head of department Professor Jeff Magee.

"This interdisciplinary approach is challenging because there is a need to protect the core discipline as well as making the research relevant," Magee says. "On the other hand, unless there is some surrounding pressure for applications, it's a sterile research environment."

A current project that illustrates this combination of theory and societal relevance is the EPSRC-funded DISSP (Dependable Internet-Scale Stream Processing) project led by Dr Peter Pietzuch. Real-time stream data has begun to play an important role on the Internet because of the proliferation of geographically distributed sources, such as Web feeds, sensor networks, scientific instruments, and pervasive computing environments. Millions of people could use this stream data, but they need a convenient way to process it on a global scale.

In much the same way as search engines make static Web data available, an Internet-Scale Stream Processing (ISSP) system collects, filters and processes stream data from potentially thousands of data sources on behalf of many users; but a problem facing all ISSP applications is that they must continue to function when network links fail and servers go down. So Pietzuch and colleagues are devising approaches that gracefully degrade the quality of results in response to these failures, and provide constant feedback to users on quality of service.

"We're questioning a basic assumption of data processing - that is, when you submit a query to a system, you want a perfect answer," Pietzuch explains. "In an Internet-scale environment, you can't return a perfect answer because you don't have global time - and you don't have a consistent view of the entire system. You have to relax your query model." While the data processing is always sub-optimal, as long as the system makes intelligent assumptions about where to drop data, it doesn't matter too much to the user, he adds.

The sensors could be road vehicles, say, and the stream data their location. The question you might want to ask is: what is the most congested road in London during rush hour? "You wouldn't need location data from all the cars in London but a representative sample should give you the answer. So the system has the freedom to give a meaningful answer even if information is lacking," explains Pietzuch.
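To make the idea concrete, here is a minimal sketch in Python of that kind of sampled, best-effort query - an illustration of the principle only, not DISSP's actual implementation (the function names and the fixed sampling rate are invented for the example):

```python
import random
from collections import Counter

def most_congested_road(location_stream, sample_rate=0.1):
    """Estimate the most congested road from a stream of
    (vehicle_id, road_name) reports, deliberately processing
    only a sample and reporting how complete the answer is."""
    counts = Counter()
    seen = processed = 0
    for vehicle_id, road in location_stream:
        seen += 1
        if random.random() < sample_rate:  # drop most reports on purpose
            processed += 1
            counts[road] += 1
    if not counts:
        return None, 0.0
    road, _ = counts.most_common(1)[0]
    return road, processed / seen          # answer plus quality feedback

# Toy stream in which the A4 really is the busiest road
stream = [(i, random.choice(['A4', 'A4', 'A4', 'M25', 'A40']))
          for i in range(10_000)]
road, quality = most_congested_road(stream)
print(f"Most congested: {road} (based on {quality:.0%} of reports)")
```

Even with 90 per cent of the reports dropped, the sample is representative enough for the answer to be stable, while the returned fraction gives the user the kind of quality-of-service feedback the project emphasises.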

ERC starting grants

Research quality is a hot topic these days, and is often judged by how well a department fares in the Research Assessment Exercise (RAE) undertaken by the Higher Education Funding Council for England (HEFCE). Imperial's computing department has been awarded the top rating in every exercise to date, and only Cambridge did better in the most recent exercise, RAE 2008.

Perhaps a more tangible measure of quality came last year, when three of the department's young computing lecturers - Sebastian Uchitel, Maja Pantic, and Andrew Davison - were awarded prestigious European Research Council (ERC) starting grants totalling €4.8m. There were 9,167 initial applications across Europe, with only 431 passing the ERC's quality thresholds for excellence; the overall success rate for these grants was 3 per cent, so to have three in one department was an outstanding result.

"We were even more pleased when we learned that we had three out of the total of four ERC awards in Computing in the UK," says Jeff Magee.

Uchitel, Pantic, and Davison's awards were for their research into three new technologies - a blueprint for complex software systems, human behaviour analysis, and a method for improving robot vision - all of which may well influence real-world IT in years to come.

Managing complex software projects

Sebastian Uchitel is creating a modelling language and tools for developing complex software systems by taking account of what might be called 'known unknowns'. Large software projects are notorious for not working as expected and, at worst, for failing catastrophically (and expensively). The problems often take root from the outset because the requirements are incorrect or incomplete, or change continuously in an unmanaged way. Uchitel's solution is a new type of model, which he calls 'partial behavioural', that can be used to explore system requirements to see if they are consistent, and to put forward new 'what if' scenarios to find out how the system might behave in unforeseen situations.

Imagine that you are developing a system for airline bookings. You might first describe the requirements with use-cases, and then have a number of properties to guarantee - for example, you do not want more bookings than the number of available seats on the plane. "Then there are questions you might want to ask about contradictory information," Uchitel says. "Are some of the use-cases violating the properties? Are there behaviours I have not thought about, and should be developing new use cases for? What if an airline says that a certain flight number will be a different plane and there will be fewer seats, say?"

So Uchitel's operational models encode not only what you know the system has to do and must not do, but also, explicitly, what you have not yet explored - in other words, the 'known unknowns'. Defining these unknowns involves a number of assumptions, as he explains: "I fix the scope of my models to any label that has been mentioned in the requirements. If we've never talked about planes, my models won't say they don't know about planes; but as soon as you talk about planes arriving or departing, I can say that at this point in the process, I don't know what will happen if a plane arrives."

Uchitel aims to influence industrially widespread languages such as UML state charts with these ideas. "In some ways I'm developing the underlying formal foundations for extensions of UML (Unified Modelling Language)," he says. As part of the research he is building a tool called the Modal Transition System Analyser, which supports automated analysis - in particular of the unknowns - of system specifications described as Modal Transition Systems. He is beginning to try the approach on small-to-medium examples and typical case studies to see whether the models really are useful in this context. Uchitel is still some way from trying this in a real software project with real users and engineers, but that next step is planned.
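To illustrate the flavour of such a model, here is a toy Python sketch of a modal transition system with 'required' and 'maybe' transitions - an informal illustration of the concept, not the Modal Transition System Analyser itself (all names are invented):

```python
class ModalTransitionSystem:
    """Toy modal transition system: transitions are 'required'
    (the system must provide them) or 'maybe' (mentioned in the
    requirements but not yet decided - a 'known unknown')."""

    def __init__(self):
        self.required = set()  # (state, action, next_state)
        self.maybe = set()

    def add_required(self, s, action, t):
        self.required.add((s, action, t))

    def add_maybe(self, s, action, t):
        self.maybe.add((s, action, t))

    def can_happen(self, s, action):
        """Three-valued answer: True, False, or 'unknown'."""
        if any(x == s and a == action for x, a, _ in self.required):
            return True
        if any(x == s and a == action for x, a, _ in self.maybe):
            return "unknown"
        return False  # in scope, but not allowed by the model

# Airline booking example: confirming a booking is required behaviour;
# what happens when the airline swaps the plane is still undecided.
mts = ModalTransitionSystem()
mts.add_required("booked", "confirm", "confirmed")
mts.add_maybe("booked", "plane_changed", "rebooking")

print(mts.can_happen("booked", "confirm"))        # True
print(mts.can_happen("booked", "plane_changed"))  # 'unknown'
print(mts.can_happen("booked", "cancel"))         # False
```

An automated analyser can then enumerate the 'unknown' answers and prompt the engineer to write new use cases for exactly those gaps.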

Behaviour analysis for interfaces

Computing lecturer Maja Pantic is interested in automatically analysing non-verbal human behaviour by simultaneously capturing visual and audio signals, such as facial expression, body gesture, posture, and voice intonation. Algorithms developed from this research - which uses findings from psychology and neuroscience - may help to make computers more efficient by providing them with the ability to recognise human signals and social behaviours like turn-taking, politeness, and disagreement, says Pantic.

"If you're working with a Web browser and looking for some--thing on the menu, for example, there is a typical pattern of gaze shifting and facial expression that reveals puzzlement," Pantic adds. "It might be useful if the computer could detect this and the interface could ask what the user is searching for or proactively illuminate menu items that might be of interest to the task in hand."

New types of computer interface are one possible application; part of the ERC grant also covers deception detection for security uses. People unconsciously display many external signs when they are lying. For instance, when we retrieve something from memory, we look up, away from the object on which our attention had been focused. Research has shown that during the thinking phase, liars and truth-tellers both avert their gaze in this way, but liars tend to look away for less time.

"Other signs of deceptive and acted [non-spontaneous] behaviour are asymmetric facial expressions, 'leakage' of subtle facial signals and gestures that do not fit with the rest of displayed behaviour, and so on," Pantic explains, "or when the laughter doesn't have true mirth inside of it."

Several studies suggest that the most reliable way to detect what someone is thinking and feeling is to combine audio and visual behavioural clues, Pantic says: "When we laugh, for example, if the onset of the smile is very fast, then the laughter is not genuine; if the onset of the smile is slow, then it probably is genuine. So timing is crucial, as is temporal correlation between signals."
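As a caricature of that rule of thumb in code (the half-second threshold below is invented purely for illustration and is not a value from Pantic's research):

```python
def smile_seems_genuine(onset_duration_s, threshold_s=0.5):
    """Toy heuristic: a smile whose onset (neutral face to full
    smile) unfolds slowly is more likely to be genuine.
    The threshold is illustrative only, not a published value."""
    return onset_duration_s >= threshold_s

print(smile_seems_genuine(0.1))  # fast onset -> likely posed: False
print(smile_seems_genuine(0.8))  # slow onset -> likely genuine: True
```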

Capturing these signals is complicated by the need to accurately synchronise cameras and microphones: even a delay of one frame between the audio and visual signals causes a loss of synchrony between the lip movements and the speech. Pantic has achieved synchronisation accuracy between audio and video data streams of ±10µs - far less than one frame at a 60 frames-per-second (fps) recording rate.
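The arithmetic behind that comparison is straightforward:

```python
frame_period_us = 1_000_000 / 60  # one frame at 60fps, in microseconds
sync_error_us = 10                # the reported sync accuracy

print(f"frame period: {frame_period_us:.0f}µs")                    # ~16,667µs
print(f"error: {sync_error_us / frame_period_us:.2%} of a frame")  # ~0.06%
```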

A further difficulty is that some signs and behaviours are culture- and person-dependent. In Greece, eyebrow raising is used all the time - often to gesture 'no' - but British people do not use an eyebrow raise in that way. In Asia or Africa, the same signal may mean something completely different.

So, instead of judging the messages and meanings conveyed by behavioural signals, Pantic's studies aim first simply to measure the signals. The next stage will be to investigate the actual messages conveyed by them, including agreement, disagreement, interest, frustration, puzzlement, and fatigue - all of which are important for achieving more natural human-computer interfaces.

So far Pantic has developed a series of algorithms that can analyse various behavioural cues, including facial muscle actions like frowns and smiles, body actions like walking and running, and vocal outbursts like laughter. She plans to extend this research to the automatic analysis of the dynamics of human behaviour, including analysing and modelling temporal co-occurrences of various behavioural cues - something research in psychology suggests is crucial for untangling the meaning conveyed by displayed behaviour.

SLAM robot vision

Lecturer Andrew Davison hopes his approach to the problem of Simultaneous Localisation And Mapping (SLAM) will enable the creation of continually updated 3D maps of environments from the input of just one camera. SLAM is a well-known challenge for robots and unmanned vehicles that need to build a map of their environment while keeping track of their own position; however, it is often solved using expensive, specialised sensors. In Davison's approach, developed with PhD student Gerardo Carrera, a single moving video camera can achieve much the same result by collecting many images from different positions as it travels through the world.

Computer reconstruction of a 3D environment in this way involves tracking salient natural image features. "Whether you direct the camera at trees or a face, there are local 'features' of that image with high intensity gradients - such as where light rapidly changes to dark - and these are easy to match and follow from one image to the next," Davison explains. Every feature position is a constraint on the relative position of the camera and the fixed world: "If you have enough of those constraints, such as a large set of measurements of the same set of features from different viewpoints, you can solve for the 3D positions for all of those features, and the 3D position of the camera at all the different time steps at which it took the images."
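A minimal sketch of that 'solve from the constraints' step is standard linear triangulation - here a textbook two-view version in Python with NumPy, not Davison's real-time algorithm (the camera geometry is made up for the example):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its images x1, x2 in two cameras with
    3x4 projection matrices P1, P2, by solving the homogeneous
    system A X = 0 in the least-squares sense (the classic DLT)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Two normalised cameras: one at the origin, one a unit to the right.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.3, -0.2, 4.0])       # ground-truth 3D feature
x1 = point[:2] / point[2]                # its image in camera 1
rel = point - np.array([1.0, 0.0, 0.0])  # point in camera-2 coordinates
x2 = rel[:2] / rel[2]                    # its image in camera 2

print(triangulate(P1, P2, x1, x2))       # ~[ 0.3 -0.2  4. ]
```

With hundreds of features observed from many viewpoints, the same kind of constraint system simultaneously pins down both the scene structure and the camera trajectory.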

The novel part of Davison's new work is to do this in real time at frame rates of 100-1,000Hz using algorithms that are computationally cheap. "Five years ago we could do camera tracking in real time with live cameras which operate at 30fps on a desktop PC. Now we have cameras in the lab that can capture at over 200fps with real-time capture to a laptop or desktop machine, but it is hard for real-time vision processing to keep up," says Davison.

A high frame rate means the ability to track fast movement, so Davison's technique opens up the possibility of adding low-cost vision to applications such as robot hands, which could then pick objects up more accurately. Another use would be in flying or jumping robots employed to explore collapsed buildings. Davison is also in discussion with a company interested in making self-guided vacuum cleaners.

An alternative opportunity is to apply the same efficient techniques at lower frame rates on cheap commodity processors, within devices such as PDAs and phones. "If you wave your phone camera around in front of you, the way it moves in three dimensions might mean something. So, for example, you could use your phone camera as a 3D mouse and manipulate a 3D object on a screen," suggests Davison.

"Making algorithms that are much more efficient than the ones we have today in terms of how much processing they require will always be a benefit. You can use the computation saved to do more, like tracking faster motion, or you can decrease the specifications of the processor needed and open new low-cost applications of camera tracking," he concludes.
