Vol 6, Issue 1

Profile: Professor Erol Gelenbe

18 January 2011
By Chris Edwards
Professor Erol Gelenbe

Professor Erol Gelenbe accepting IET award

His research into self-aware and autonomic networks explores the frontiers of probability and performance evaluation in online worlds: E&T profiles a true IT innovator.

In a laboratory at the top of the Electrical and Electronic Engineering department at Imperial College, London, a complete network of servers is demonstrating how techniques used to model neurons in the brain can help get packets of data on the Internet to their destinations faster. We're with Professor Erol Gelenbe FIET, the winner of the 2010 IET Oliver Lodge Medal for Achievement in Information Technology, when I spy a little plastic box sticking out of the back of one of the blade servers.

The box – well-camouflaged by the mix of brightly coloured cables used to connect the servers to each other – protects a custom circuit board that's being used for the next phase of research: one that aims to slash the energy used by large-scale networks by letting data packets find their own way through the switches.

Gelenbe's research in IT occupies the boundary between the virtual and the physical: 'It is one way of looking at reality,' as he describes it. 'As an example, you can think of a chemical reaction as a combination of internal processing and communication. You have communication within the molecules and between them: the internal and electrostatic forces that create the bigger structures.'

That model can be applied at different levels of abstraction, Gelenbe believes: 'You can apply it at the biological level: neural networks, for example. But also to how herds of animals forage. They all involve the internal processing of information from the environment and then making decisions,' Gelenbe says. 'It's one possible paradigm, a very computer-centric view of what's going on.'

This view, however, makes it possible to extend the same basic concepts from computer networks through to the behaviour of the brain – and possibly point to how a new generation of massively parallel machines might be designed that can tolerate, and work through, failures: 'In a computer network or a telephone network, you have switches. The switches receive calls or requests for calls, and then send these requests forward to other switches until the request for a call reaches the destination switch. Then the phone rings. You lift the receiver, and a continuous stream of information then proceeds between the two end points.

'Similarly, in a neuronal network you have clusters of neurons that act as switches. They receive requests for calls as groups of electrical spikes and they route them to specific subsystems. For example, the signals from eyes go into a specific area. They don't go into the auditory system; but that's not because they couldn't,' Gelenbe points out. 'They could, but they don't.'

Constant use keeps the same neuronal pathways open for hearing or sight, making sure the signals from those nerves make it into the right part of our brains; but, Gelenbe suggests, these pathways are plastic: 'If you lose a hand, the part of your brain that was handling that will start to get information from other parts of the brain – say, signals from your leg.'

Because there is no longer any information from the lost limb, the cells lose their original training and begin to process the information that might have originally wound up being routed to this cluster of neurons by mistake.

'There is a constant plasticity going on. You have these trains of spikes being used to open up new pathways. The less certain pathways get used, the more they get opened up to other parts that want to use them. The cells are exercising competitive attention for requests for pathways,' explains Gelenbe. 'We are constantly dynamically altering the available pathways. It's very similar to a computer network or a computer where you have requests for jobs to be executed.'

Starting points

This representation of reality began with Gelenbe's first steps in research. His scientific activity started in automata theory: 'An automaton is a good example of this paradigm. It receives inputs, and then it acts internally and generates outputs for the rest of the world,' he says. It was a move into teaching in computer science, however, that provided the vital link between automata and Gelenbe's far-reaching theories.

'When I started teaching at the University of Michigan at Ann Arbor, I was asked to teach about computer systems,' recalls Gelenbe. 'I saw a lot of ad hoc stuff. I thought: "Can one build a physical theory of how things should be done?" It took about ten years to get into queuing theory. The initial push lay in the physics, the reality... But that pushed me into the mathematics, and ultimately allowed me to move into other areas. The US system is very open in the way they allow you to jump into different things.'

A chance encounter at Nasa's research laboratories in northern California provided the link between queuing in computers and neural networks: 'They flew U2s to look for Russian submarines off the coast of California. They had these enormous wind tunnels to test the aircraft. I had been hired over several summers towards the end of the 1980s to use parallel processing to replace the wind tunnels: simulate turbulent behaviour on computers. There I went to seminars on neuroscience. I discovered that the models being used did not correspond well to reality. I thought I could do something better. It became a 20-year project.

'Aside from the intuitive analogies between systems, there are aspects that are more mathematical,' Gelenbe reports, pointing to his current work that attempts to model incredibly large systems, whether networks of neurons in the brain, or the billions of computers attached to the Internet: 'The Internet is for all practical purposes infinite. You can sit down and say you have so many computers or so many billion neurons. The order of magnitude is "this", but that's all you can say. It's useless to think of them as finite systems. The mathematics tend to be intractable.'

The answer lay in the area of mean-field theory, which makes it possible to focus on individual elements of a huge system, and so compute a 'mean field' created by close neighbours: 'You can use separability to derive properties in the large. You have a mapping of the global system into its subcomponent parts. By creating a field you can look at what they do all together,' reports Gelenbe.

'When analysing computer networks or the brain, you use techniques of that kind. For random neural networks, the work showed that mean-field analysis provides exact results. That was the power of the tool.'
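The exact results Gelenbe mentions come from the product-form solution of his random neural network model, where the steady-state probability that each neuron is excited satisfies a simple fixed-point equation. The sketch below iterates that fixed point; the function and variable names are mine, invented for illustration, not taken from any published code.

```python
import numpy as np

def rnn_steady_state(Lambda, lam, r, p_plus, p_minus, tol=1e-10, max_iter=10000):
    """Iterate the random-neural-network fixed point:
        q_i = lambda_plus_i / (r_i + lambda_minus_i)
    where lambda_plus_i  = Lambda_i + sum_j q_j * r_j * p_plus[j, i]
          lambda_minus_i = lam_i    + sum_j q_j * r_j * p_minus[j, i].
    Lambda, lam : external excitatory / inhibitory spike arrival rates
    r           : neuron firing rates
    p_plus, p_minus : spike routing probabilities (row j = sender)."""
    q = np.zeros_like(r, dtype=float)
    for _ in range(max_iter):
        fired = q * r                                   # spike emission rates
        q_new = (Lambda + fired @ p_plus) / (r + lam + fired @ p_minus)
        q_new = np.clip(q_new, 0.0, 1.0)                # q_i is a probability
        if np.max(np.abs(q_new - q)) < tol:
            break
        q = q_new
    return q_new

# Two neurons: neuron 0 is excited externally at rate 0.5, fires at rate 1,
# and sends every spike it fires to neuron 1 as excitation.
Lambda = np.array([0.5, 0.0])
lam = np.zeros(2)
r = np.ones(2)
p_plus = np.array([[0.0, 1.0], [0.0, 0.0]])
p_minus = np.zeros((2, 2))
q = rnn_steady_state(Lambda, lam, r, p_plus, p_minus)
```

In this small feed-forward case the iteration settles at q = [0.5, 0.5]: neuron 0's excitation probability is 0.5/1, and everything it fires arrives at neuron 1.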

To infinity (and beyond?)

One of Erol Gelenbe's most recent papers is on search in infinite environments, such as the Internet. 'If the environment is infinite, but the object you are searching for is at a finite distance, can you find it and how long will it take?' he asks. 'In an infinite environment you can go off and get lost.' The key is to have searchers who, if they fail, pay the ultimate price. 'Whoever sends out the searcher says: "If you haven't succeeded, please commit suicide. And make sure your replacement doesn't do what you do." You can do that by randomising the search.'
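The strategy in that quote can be mimicked with a toy simulation: a random walker on the integer line hunts for a target at finite distance, and any walker that exceeds a deadline is killed and replaced by a fresh, independently randomised one. The restart is what keeps the total search time finite even though any individual walker can wander off. The timeout and step distribution here are illustrative choices of mine, not those of the paper.

```python
import random

def search_with_restarts(target=10, timeout=200, seed=42):
    """Searchers start at the origin and take unit steps left or right
    uniformly at random. A searcher that has not found `target` within
    `timeout` steps 'commits suicide', and a fresh, independently
    randomised replacement starts over from the origin. Returns the
    total number of steps spent by all searchers combined."""
    rng = random.Random(seed)
    total_steps = 0
    while True:
        position = 0
        for _ in range(timeout):
            position += rng.choice((-1, 1))
            total_steps += 1
            if position == target:
                return total_steps

steps = search_with_restarts()
```

Without the restart rule, a one-dimensional walk does find the target eventually, but with infinite expected time; the abandonment-and-replacement policy is what makes the expected cost finite, which is the point Gelenbe is making.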

Such a process of self-discovery may hold the key for more energy-efficient networks, which is why a little box is sitting on one of the servers in the aforementioned computer room of Imperial College London's Electrical and Electronic Engineering department. In large networks, top-down planning for energy efficiency becomes too complex to implement, Gelenbe argues: 'With very large systems, such as computer networks, you never know once and for all what you should be doing. The idea is to do it all adaptively and have the traffic between machines discover the paths to "greenness". We use discovery and self-awareness to look for energy efficiency.'
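A crude sketch of that idea: probe packets wander through a small network, each node keeps a smoothed estimate of the remaining cost (energy, say) through each neighbour based on what the packets observe, and traffic gravitates towards the cheapest discovered path. This is only an illustration in the spirit of Gelenbe's adaptive-routing work, not his actual algorithm; every name and parameter below is made up.

```python
import random

def discover_green_path(graph, cost, src, dst, episodes=300, eps=0.2,
                        alpha=0.3, seed=0):
    """Probe packets explore from src to dst. Each node keeps a smoothed
    estimate est[u][v] of the cost to the destination via neighbour v;
    packets follow the best current estimate, but explore a random
    neighbour with probability eps so new paths keep being discovered.
    Returns the cheapest (path, total_cost) found."""
    rng = random.Random(seed)
    est = {u: {v: 0.0 for v in nbrs} for u, nbrs in graph.items()}
    best = None
    for _ in range(episodes):
        node, path, total = src, [src], 0.0
        while node != dst and len(path) <= len(graph):
            nbrs = graph[node]
            if not nbrs:
                break
            nxt = (rng.choice(nbrs) if rng.random() < eps
                   else min(nbrs, key=lambda v: est[node][v]))
            c = cost[(node, nxt)]
            # Reinforce: observed hop cost plus best estimate from nxt onward.
            onward = 0.0 if nxt == dst else min(est[nxt].values(), default=0.0)
            est[node][nxt] = (1 - alpha) * est[node][nxt] + alpha * (c + onward)
            total += c
            node = nxt
            path.append(node)
        if node == dst and (best is None or total < best[1]):
            best = (path, total)
    return best

# Toy network: A->B->D costs 1+1; A->C->D costs 1+5.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
cost = {('A', 'B'): 1.0, ('B', 'D'): 1.0, ('A', 'C'): 1.0, ('C', 'D'): 5.0}
best = discover_green_path(graph, cost, 'A', 'D')
```

Note that no node ever sees the whole network: each only updates its local estimates from passing traffic, which is the 'observation at each node fed back to the traffic' that Gelenbe describes next.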

He continues: 'At each node, you have a certain amount of observation going on. And that information is fed back to the traffic being carried. The sources of information look for the best course of action. And they have to make their own compromises.' Such local interactions can be found in models of cancer taking hold in a cell.

According to Gelenbe: 'The different proteins in a cell influence each other in complicated feedback paths. If some patterns are blocked, some things don't happen and those that shouldn't take place do. In cancer, the signal that says "reproduce" is on all of the time. So the cell is reproducing all the time.

'In computers, you see the same sort of behaviour. Local processing generates further information. The processors do work then stop and ask "what's next?". They do more, generate more information and do more processing.'

The concept can even be extended to simulate and use social behaviour in information processing, something that Gelenbe is putting to use in a European Union research programme called The Social Computer. This project describes itself as 'a future computational system that harnesses the innate problem solving, action and information gathering powers of humans and the environments in which they live in order to tackle large scale social problems that are beyond our current capabilities.

'The hardware of a social computer is supplied by people's brains and bodies, the environment where they live, including artifacts, e.g., buildings and roads, sensors into the environment, networks and computers; while the software is the people's minds, laws, organisational and social rules, social conventions, and computer software.'

'It looks at social interactions as computational problems,' Gelenbe adds. 'How you achieve certain social outcomes. What are the inputs? For example, if you take the issue of tuition fees, what sets of rules do you need to put into a system such that you can guarantee increased social mobility? What are the rules you need in a system that is generally ungovernable?'

Gelenbe has had first-hand experience of how changes in rules can result in unexpected social outcomes without the benefit of using models and simulation. After he moved from the US to Europe in the 1970s, Gelenbe worked as a science adviser to the French government: 'We introduced computer-science education across the whole undergraduate curriculum. I had to get the funding for that.' The problem caused by small actions in large, practically infinite systems provides a broader backdrop to this research: 'What is fascinating is this idea of working with systems that are infinite or unbounded. When one tries to build a mathematical model of something, one tries to specify everything. You can get out of this a little by introducing probability here or there. We accept that some things are known and some are not.'

As things become better understood, the model changes, Gelenbe asserts. 'The question is how could one develop modelling methods that include known, partially unknown and unknown elements in the same framework. Take, for example, the gene regulatory networks in cancer research: we have models, but they are incomplete. We do not know all the feedback loops or even all of the components. The search in infinite environments paper is an example of that work. It all fits into how you can do things in a precise mathematical manner, but where you assume you do not know the rest.'

Gelenbe points out that pure mathematics cannot provide a solution but computer science can fill in the gaps. 'On a computer system, you can say 'I don't know'. You can't do that with an equation. There is a mathematics part to this but also a computer-science part related to how you express things changing over time through logical constructs.'

It is work that puts computer science at the heart of a new understanding in how huge, practically infinite systems work, and can be made to work better. *

Further information


Curriculum Vitae: Professor Erol Gelenbe

2003-present

Professor in the Dennis Gabor Chair at Imperial College, London; Head of the Intelligent Systems and Networks Section.

1998-2003

University chair professor, director of the School of EECS, and associate dean of engineering, at the University of Central Florida.

1993-1998

Nello L. Teer professor (endowed chair) of engineering and computer science, and chair of ECE department, Duke University.

1991-1993

New Jersey State endowed chair of computer science, NJIT.

1984-1986

Science and technology adviser to French Cabinet Minister for Universities, on secondment from University of Paris Orsay.

1979-2002

Associate professor then professor of computer science (professeur de 1ère classe), University of Paris Orsay (France); moved to Université Paris V (René Descartes) in 1986; promoted to professeur de classe exceptionnelle in 1991, and to 2è échelon (highest academic rank in France) in 2002. On leave of absence from the University of Paris 1991-2005.

1979-1987

Maître de conférences à mi-temps (half-time lecturer) in applied mathematics, Ecole Polytechnique.

1974-1980

Professor of computer science, University of Liège.

1972-1973

Research engineer.

1973-1982

Research Group Leader (part-time) at INRIA.

1973-1974

Maître de conférences associé, Université Paris Nord.

1970-1974

Assistant professor of computer, information and control engineering, University of Michigan, Ann Arbor.

Career Moves: Going mobile

Professor Erol Gelenbe has enjoyed an international career in academia – first moving from his native Turkey to the US in the late 1960s when he was awarded a Fulbright Fellowship to study in New York for a PhD in automata theory, completed in just two years.

'I was asked to give a presentation to the review board, and in that room was the inventor of fuzzy logic, Lotfi Zadeh,' Gelenbe recalls. The two became friends; Zadeh helped Gelenbe move to his first research position at the University of Michigan at Ann Arbor, where he met a number of pioneers in information technology such as John Holland, who developed genetic algorithms, and Arthur Burks, one of the developers of the Eniac computer.

After five years in the US as a Fulbright fellow, Gelenbe had to move out of the country, and took a leave of absence from Michigan, initially to go to Belgium. His Turkish citizenship proved problematic for the Belgian authorities, so he diverted to France.

In 1993, Gelenbe moved back to the US, first to Duke University and then on to the University of Central Florida as director of the electrical engineering and computer science school. Following this more administrative role, he welcomed the opportunity to move to a research-focused position in the UK early in the last decade. Professor Igor Aleksander, who developed a widely used model for neural networks, retired from the Dennis Gabor chair at Imperial College, London in 2002. Gelenbe became his replacement, selected from a very short list.
