How realistic is the engineering behind movie robot WALL-E?
Patrick: I guess the question that the 'Wall-E' movie raises, from an engineering standpoint, is how much of it might ever be possible. Are we heading towards a future in which our thoughtless over-consumption forces us to rely on machines to clean up our mistakes? Well, filmmakers have a hard time dealing with the cold realities of research into artificial intelligence. You can film a dustbin on a pedal car, make it beep and christen it 'RnDm', but that gets us no nearer to knowing how to build human-like behaviour.
Hollywood has always had a big problem portraying how real computing systems operate. Screen graphics get used to explain the impenetrable plot action, with enormous text shouting 'Alert - Incoming warheads!', or vacuous CAD imagery a decade out of date. Despite strong historical links to Apple Computer Inc, Pixar is not really in the business of describing the state of the art.
Sure, we can make specialist devices that will cut the grass, drive around the desert or say 'hi' to conference-goers ...but ask them to undertake real-world flexible tasks, and they always fail. Just being able to fetch the newspaper from a shop, even when it's raining or cars get in the way, would be a major technical triumph.
There is a school of thought saying that, if machines like Wall-E don't care about things, they won't be able to make any choices. We may therefore need to equip them with ways to have feelings. This raises one of the hardest problems: will machines be, or need to be, conscious? Will they feel as people do, and how might we tell?
In '2010', by Arthur C Clarke, humans find themselves having to trust their supercomputer, HAL, to save them through its own willingness to self-sacrifice. At London's Science Museum, a new exhibition currently features robots that seem to react 'emotionally' to the faces of human visitors. As for computers falling in love, I think we can forget about robots spontaneously developing emotions just by repeatedly watching tapes of 'Hello, Dolly!'.
Some people think that humans have become as lazy and machine-dependent as those in 'Wall-E', and that maybe we just want robots as slaves. Huge effort is currently being expended to make machines capable of tending to bedridden old people who find themselves without the family-based support that was the default expectation a generation ago.
In terms of cleaning up the planet, let's look at the Mars rovers. These are an engineering tour de force, without question. They are, however, incapable of making decisions for themselves in connection with anything other than highly structured, routine tasks. I don't fancy their chances of dealing with any of the chaos that humans leave in their wake, and it's not in the least clear how to upgrade them incrementally. Even as I write this, however, robots are being used to help decommission the Dounreay nuclear power station. Some of these are actually programmed to deal with some situations they encounter by themselves, and are therefore more than just remote-control vehicles steered by people.
Making such cleanup machines capable of self-repair is hard to imagine, and arranging for 700 years of such maintenance and improvement to occur autonomously is a pretty daunting task. Nature therefore opted for reproduction on a fast timescale, rather than large-scale repair.
There is a strengthening opinion that to get human-like robotic performance we will need something called 'the singularity' to occur. That's when unprecedented technological progress is supposed to take place, driven by the sudden ability of machines to improve themselves and make their inventors redundant. I think this is pie in the sky and we need to look in a different direction to be able to engineer Wall-E and co.
That involves understanding more about the only systems known to display general flexibility and problem-solving intelligence: flesh-and-blood ones.
Mark: I really cannot envisage a time (in my lifetime, anyway) when a computer, alone, could develop true AI - the ability to think at a proper human level.
In the early stages of the PC revolution, expectations in the field of AI were high, and many naïve comparisons were drawn between how a computer and the human brain work. With RAM as the 'conscious mind' and ROM as 'our stored memories', people likened the computer boot-up stage to waking up in the morning - that moment on waking when you have no conscious awareness of the external environment, or even your name, until your 'RAM memory' loads. But this is where the similarity stops, and it's a big stop. Computers are essentially, at their primary level, machines that can identify whether something is on or off, equalling '0' or '1', aren't they? Or am I the one being naïve now?
Patrick: You are exactly right, Mark, but don't forget that our brains contain thousands of millions of cells (neurons) which, at any given moment, are either firing or not firing nerve impulses from one to the next. Some people believe that the neuron/memory-transistor analogy is a valid one and that we are 'just' very big collections of switches, cleverly connected.
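The 'collections of switches, cleverly connected' idea can be sketched in a few lines of Python using threshold units in the style of McCulloch and Pitts. This is a deliberately toy model - real neurons are far messier - and the weights and thresholds below are invented purely for illustration:

```python
# A toy "neuron as switch": it either fires (1) or does not (0),
# depending on whether its weighted inputs reach a threshold.

def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Two "switch" neurons feeding a third: together they compute XOR,
# something no single threshold switch can manage on its own -
# a hint of why the *connections* matter as much as the switches.
def xor(a, b):
    or_gate = neuron([a, b], [1, 1], 1)    # fires if either input fires
    and_gate = neuron([a, b], [1, 1], 2)   # fires only if both fire
    return neuron([or_gate, and_gate], [1, -1], 1)  # OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

The point of the sketch is only that wiring simple on/off elements together yields behaviour none of them has alone - it says nothing, of course, about consciousness.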
Mark: Much has been promised in the pursuit of AI but, in reality, researchers have only just scratched the surface. I agree with the view that, to realise AI, a new fundamental idea or ingredient is needed - or, as you put it, for a 'singularity' to occur.
That said, I was recently interested to see an apparent breakthrough in AI (albeit a compromise between natural and artificial intelligence) called 'RoboRat' - or Gordon to his friends - at the University of Reading. Apparently, researchers there cultured neurons from the living brain tissue of a rat - quite ironic really, if you remember my visceral fear of rats [see 'Inventors Inbox', issue 14], that one day a RoboRat may become my trusted servant. The nerve cells are laid out on a nutrient-rich medium across an array of electrodes, which serves as the interface between living tissue and machine. The brain sends electrical impulses to drive the wheels of the robot, and receives impulses delivered via sensors that react to its environment, so it has learnt how to direct its movement and control itself.
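The sense-act loop just described can be caricatured in code: sensor readings go in, the 'brain' maps them to wheel commands. This is only a sketch of the control idea - the stand-in function below is entirely invented, whereas Gordon's controller is a living neuron culture, not Python:

```python
# A caricature of the RoboRat sense-act loop. The sonar readings and the
# rule inside brain() are invented for illustration; the real system feeds
# sensor impulses to cultured neurons, whose firing drives the wheels.

def brain(left_distance, right_distance):
    """Stand-in for the neuron culture: steer away from the nearer obstacle."""
    if left_distance < right_distance:
        return "turn right"
    elif right_distance < left_distance:
        return "turn left"
    return "go straight"

# One pass around the loop: sense, decide, act.
readings = [(10, 30), (30, 10), (20, 20)]
for left, right in readings:
    print(f"sensors {left}/{right} -> {brain(left, right)}")
```

The interesting part of the real experiment is that no such rule is written down anywhere: the mapping from sensation to action emerges from how the living cells respond and adapt.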
This is pioneering stuff that should help determine how memories are stored in a biological brain. It could help to combat neurodegenerative diseases, such as Parkinson's and Alzheimer's, but it is the potential spin-off of this technology that really interests me. Forget the ethical questions for a moment and you are looking at creating a sub-human cyborg - a flesh-and-blood-with-wires Wall-E! I would start stitching those neurons now, as it may take some time. A giant step, or not, for mankind?
Patrick: There is at least one cyborg professor who has injected himself with smart electronic implants - and my mother-in-law carries a life-preserving, adaptive defibrillator system in her chest.
It used to be thought that we could build smart machines by discovering a universal, rule-based logic engine that would allow us to combine human ingenuity with machine speed and reliability. Alan Turing, whom some see as the father of AI, said that by about 2000 it would be possible to program a (rule-following) computer so that it could converse almost as well as a person.
Extracting reliable data from the complex real world is still a major hurdle; once that has been achieved, reasoning about what has been detected will be comparatively easy. Although we like to come up with something new between us, inventing a development in AI is not a back-of-the-envelope task... so here's one I made earlier: see www.scenereader.com, a software system which can react to signs in complex scenes.
It seems to me that humans are a rule-following species - just like machines - but that each of our internal programs can achieve intuitive leaps, insights and creative ideas by accepting input data in many different forms. For example, a certain spreadsheet of numerical data might be interpreted visually, from across the room, as a portrait of Einstein. You can, however, rest assured, Mark, that inventors will be the last people to find their skills overtaken by computers.
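Patrick's spreadsheet example can be mimicked in a few lines: the same grid of numbers reads as data when taken cell by cell, but as a picture when rendered. The values and the character ramp below are invented for illustration (they trace a letter 'A', not Einstein):

```python
# The same 2D grid of numbers, "read" two ways: as spreadsheet values,
# and as an image by mapping each value to a shade. Data is invented.

data = [
    [0, 0, 9, 9, 0, 0],
    [0, 9, 0, 0, 9, 0],
    [0, 9, 9, 9, 9, 0],
    [0, 9, 0, 0, 9, 0],
    [0, 9, 0, 0, 9, 0],
]

shades = " .:-=+*#%@"  # darker characters for larger values (indices 0-9)

# Reading 1: a number, as a spreadsheet user would see it.
print("sum of all cells:", sum(sum(row) for row in data))

# Reading 2: a picture, as an eye across the room would see it.
for row in data:
    print("".join(shades[v] for v in row))
```

Neither reading is 'in' the numbers themselves; the leap between them is exactly the kind of re-interpretation machines find hard.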
Mark Sheahan - www.squeezeopen.com [new window]
Patrick Andrews - http://iotd.patrickandrews.com [new window]
A search carried out by the British Library Research Service (www.bl.uk/research [new window]) on 'artificial intelligence' revealed six patents - EP1083488, US2003074337, FR2616563, US2003009590, US2006200433 and US6678667 - which can be viewed on Espacenet. Readers can send their own thoughts to email@example.com.