
Frankenstein redux: Is modern science making a monster?

Could current experiments in science and technology lead to the creation of a modern-day Frankenstein’s monster?

A towering beast with yellowed skin, shrivelled lips and sunken eyes, lurking in the shadows, waiting to squeeze the life out of any who cross its path - this is the creation of a man who played with science. It’s been 200 years since Mary Shelley planted the seeds that became the novel Frankenstein, and her ominous warning sounds louder than ever: do not interfere with what you do not understand.

We have come a long way from the idea of stitching together decomposing body parts and somehow ‘zapping’ them to life. Living things are no longer thought to be animated by ‘animal energy’ created by the soul. Yet scientists still attempt to recreate life in different forms, and several fields find themselves tarred with the ‘Frankenstein’ brush.

None more so than synthetic biology. In 1999, world-renowned synthetic biologist Craig Venter set the ball in motion by exclaiming “Shelley would have loved this!” when he announced plans to create the first synthetic biological genome. Later, in 2009, academic philosopher Henk van den Belt, in a paper published in NanoEthics, questioned whether synthetic biology could be accused of ‘Playing God in Frankenstein’s Footsteps’ by attempting to manipulate life.

Synthetic biology may seem the most obvious example, but as far back as the 1950s another major experimental field was also being criticised for trying to play god - artificial intelligence (AI). In 1950, Isaac Asimov coined the infamous ‘Frankenstein complex’ in his short-story collection ‘I, Robot’, giving a name to the fear that robots would turn on their creators. Several other authors and mainstream movie directors have since followed suit.

So why are we still obsessed with Frankenstein? Because even with the strictest ethics and highest standards of care, things can, and do, go wrong. With advances in science and technology announced every day, the potential for disaster seems closer than ever.

How worried should we be about the scientific fields that attempt to create new life? Is there any real chance that someone could create a modern-day version of the Creature that so plagued Victor Frankenstein’s existence? Let’s pause to consider what could happen if things went awry.

Synthetic biology

This field has probably received the brunt of Frankenstein comparisons over the years. There have been countless stories in the media of modern-day Frankenstein experiments, and scientists attempting to artificially create and manipulate living organisms. Just how justified is this comparison, and should we actually be afraid?

On the whole, synthetic biology is about applying engineering to biology to make it better, or more useful. In fact, many practitioners in this field are not life scientists by training, but engineers who have crossed over into the area.

Contrary to what the media may have you believe, says Richard Hammond, head of synthetic biology at Cambridge Consultants, the synbio community are not mad scientists in labs. Rather, “the intent of most people working in the field is to improve things in some way,” he comments. “There are very real and difficult problems in the world that people are trying to solve.”

To do this, synthetic biologists take natural molecules and reassemble them to create systems that act unnaturally. Manipulating organisms in this way can be put to a whole host of uses, from diagnostics to creating micro-factories in the form of ‘reprogrammed’ cells that produce drugs and other chemicals. In the past, synthetic biologists have produced diagnostic tools for viruses such as HIV and hepatitis.
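To give a flavour of that engineering mindset, below is a minimal simulation sketch in Python - purely illustrative, and not drawn from any of the projects mentioned here - of the ‘repressilator’, a well-known synthetic circuit of three genes that each repress the next in a ring, causing protein levels to oscillate. The parameter values are toy numbers chosen for demonstration.

# A toy model of an engineered genetic circuit: the repressilator.
# Three genes repress one another in a ring; mRNA (m) and protein (p)
# levels are integrated with a simple Euler scheme. All numbers are
# illustrative parameter choices, not measurements.

ALPHA, ALPHA0, BETA, N = 216.0, 0.2, 0.2, 2.0  # promoter strength, leakage, decay ratio, cooperativity
DT, STEPS = 0.01, 50_000                       # time step and number of steps

m = [1.0, 0.0, 0.0]  # mRNA levels for genes 0, 1, 2
p = [0.0, 1.0, 0.0]  # protein levels for genes 0, 1, 2

for _ in range(STEPS):
    # Each gene i is repressed by the protein of gene (i - 1) mod 3.
    dm = [-m[i] + ALPHA / (1.0 + p[(i - 1) % 3] ** N) + ALPHA0 for i in range(3)]
    dp = [BETA * (m[i] - p[i]) for i in range(3)]
    m = [m[i] + DT * dm[i] for i in range(3)]
    p = [p[i] + DT * dp[i] for i in range(3)]

print("protein levels at end of run:", [round(x, 1) for x in p])

Even this toy model captures the point: the ‘program’ is a set of interactions between molecules, and engineering it means choosing the wiring and the parameters.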

Of course, there are risks with using technology of this kind, but these are no different from those posed by any other type of scientific research. Sometimes, experiments do not go as planned, but provided they are carried out under controlled circumstances, this shouldn’t be a problem.

“The issue is that fear dominates the conversation,” says Rob Carlson, director of Bioeconomy Capital. “There are many countries in the world in which scary stories about synthetic biology or about genetically modified organisms completely overshadow any fact that might be available.”

One concern among scientists is the potential mismatch between our knowledge of how biological systems really work and our ability to make changes to them. The gene-editing techniques available to science are almost all derived from nature, but small changes to their biochemistry have produced increasingly powerful tools.

Discovered in 2012, the Crispr-Cas9 system allows precision editing of all kinds of cells - bacterial, plant and animal - with little risk of the edits turning up in the wrong part of the genome. It has proved successful in lab studies at addressing hereditary diseases and conditions by editing out the mutations responsible: in one study, mouse genomes were edited to correct the mutation that causes the metabolic disorder hereditary tyrosinemia in humans.

However, with this increased confidence in our editing prowess comes risk, says Seth Goldstein, associate professor in computer science at Carnegie Mellon University, who is particularly concerned by recent reports that scientists in China have used Crispr-Cas9 to edit non-viable human embryos to make them resistant to HIV infection.

“The news that the Chinese have recently used Crispr-Cas9 to modify a human embryo is one of the scariest things I have read in the recent past,” he says. “We think we know more than we do and we are going to start selecting for our children based on things we don’t really understand.”

The combined result of all these concerns is that research into synthetic biology is subject to heavy restrictions and regulations, which not only prevent dangerous materials from entering the environment, but also stop potentially lifesaving applications from making it into the field.

In 2012, researchers from Cambridge University announced that they had developed biosensors for use in detecting arsenic in groundwater, a blessing for countries such as Bangladesh, where contaminated water causes serious problems for the population. The biosensor, developed by Dr Jim Ajioka and Dr Jim Haseloff, is cheap, non-toxic and easy to use, but the project has stalled because the sensor is not approved in Europe.

“The European Commission are essentially blocking its introduction because they don’t have any proper mechanisms for dealing with this kind of innovation,” says Richard Kitney, professor of biomedical systems engineering at Imperial College London.

Projects like this don’t only come up against governmental blocks. “A lot of NGOs out there take the attitude of ‘this could be dangerous so we’d better not do it’,” says Kitney. “What they don’t do is take into account the risk of doing nothing. There are literally thousands of people in Bangladesh who are dying, or being seriously disfigured, from drinking arsenic in groundwater.”

Programmable matter

For the Symbrion research project, researchers from the Bristol Robotics Laboratory and other groups designed cubic robots that could move and act individually but, when programmed to, would work together, even combining into a larger, more capable robot. The idea is the first step towards ‘programmable matter’.

Alan Winfield, researcher in cognitive robotics at the lab, says the 10cm robots were “absurdly large”, but they demonstrate what could be achieved with mechanical systems that cooperate with each other. “If you were to imagine shrinking those robots to things a fraction of the size of a sugar cube, and if you had hundreds of those, then you are getting something approaching what you could call programmable matter,” he says.

There are many potential applications of such technology. In disaster situations, swarms of microscopic robots could be sent into collapsed buildings to tend to injured survivors. Shrunk further, swarms could be used to perform medical procedures inside the human body, entering through a keyhole-sized incision.

Miniaturisation, though, is the problem. “If you want to have robots that are literally the size of a grain of sand they certainly can’t be made right now, and they probably can’t be made even in the foreseeable future,” says Winfield.

Programmable matter in fiction provides a potential monster: take the morphing T-1000 robot from the Terminator movies, or Michael Crichton’s sentient and genocidal nanobot ‘swarms’ in his 2002 novel ‘Prey’. But Winfield believes this kind of risk is largely hypothetical. It would be straightforward to introduce a kill switch that causes the individual robots to separate and go dormant, he argues - and that assumes the assembled robot has a will of its own in the first place.

“Taking a whole bunch of cells and just gelling them together doesn’t make an intelligent thing,” he says. “Programmable matter is more akin to a sponge than an autonomous intelligent machine.”

If the kill switch fails, simply starving the creature would almost certainly disarm it.

“Most robots, including the ones in our projects, have a battery, which means that they have a fixed lifetime,” says Winfield. “Once the energy runs out then that’s it, the robot just stops working.”

In any case, Winfield argues, the situation would arise only if someone were to design a system of self-assembling robots that could replicate. That would add even more complexity to something that is already hard to miniaturise.
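As a minimal sketch of those two fail-safes - the kill switch and the finite battery - consider the following Python fragment. The SwarmModule class and its behaviour are hypothetical illustrations of the idea, not code from the Symbrion project.

import random

class SwarmModule:
    """One cube in a hypothetical self-assembling swarm (illustrative only)."""

    def __init__(self, battery):
        self.battery = battery  # remaining energy, arbitrary units
        self.docked = True      # physically joined to its neighbours
        self.dormant = False    # shut down, inert

    def step(self, kill_signal):
        """Run one control-loop tick."""
        if self.dormant:
            return
        if kill_signal:
            # Kill switch: detach from neighbours and shut down.
            self.docked = False
            self.dormant = True
            return
        # Normal operation drains the battery; with no recharging,
        # the module has a fixed lifetime and simply stops working.
        self.battery -= 1.0
        if self.battery <= 0.0:
            self.dormant = True

# 'Starving' the swarm: run long enough and every module goes dormant.
swarm = [SwarmModule(battery=random.uniform(50.0, 150.0)) for _ in range(100)]
for _ in range(200):
    for module in swarm:
        module.step(kill_signal=False)
assert all(module.dormant for module in swarm)

The point of the sketch is that dormancy is the default failure mode: cut the signal or drain the battery and the would-be monster simply stops.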

The nearest we are likely to get even in the medium term is a robot probe sent to Mars or some other planet alongside a 3D printer and feedstock that would allow it to make repairs to itself in the field. Taken to its logical conclusion, this is the essence of John von Neumann’s concept of self-replicating probes for interstellar exploration: each machine building others to venture further. The ability to make a group of machines fully self-sufficient remains far in the future. The key will be to ensure that they do not replicate themselves into a threat.

Artificial intelligence

Elon Musk is the backer of several ambitious technological ventures - not just electric cars, but spacecraft and a supersonic travel tube. However, he is worried about the prospect of a technological monster: AI. He wrote on Twitter in 2014: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.”

Stephen Hawking has similar fears. In an interview with the BBC in 2014, he said that developing AI fully “could spell the end of the human race”. Although the primitive forms of AI developed so far have already proved very useful, Hawking says he fears the consequences of creating something that can match or surpass humans. None of the examples we have today come even close to being able to do this.

There are many examples of attempts to create AI, from Google’s Go-playing AlphaGo computer program to Microsoft’s teenage Twitter chatbot Tay, but these don’t really live up to true ‘intelligence’ of the kind that Hawking and Musk are concerned about. AlphaGo has restricted capabilities outside of playing Go, and Tay’s skills are limited to regurgitating other Twitter users’ tweets.

But what could happen if a machine were to become truly intelligent, possibly even sentient?

One concern is that such a machine could bring about a ‘technological singularity’ - a hypothetical event in which AI becomes capable of autonomously building ever smarter and more powerful machines. It sounds scary, but many people working in robotics argue that real AI, the likes of which could bring about such a situation, is still a long way off, if indeed it can ever be realised.

Robert Sparrow, a professor at Monash University whose research interests include applied ethics, points out that most of the arguments in favour of achieving machine sentience rest on comparing neurons in the brain with transistors on chips. The reality, he says, is far more complicated.

“If you look at what makes human beings tick when we are trying to repair them, when someone comes to a psychiatrist when they are mentally ill, we are completely clueless about how the brain works, and our treatments are laughably primitive,” he says. “If someone said to me that they are going to build a sentient robot in the next 20 years, I would be very surprised.”

Noel Sharkey, a computer scientist and co-director of the Foundation for Responsible Robotics, is also sceptical. “As a scientist I could never say never,” he says when pondering the question of whether we will ever achieve machine sentience. “I just don’t think that we have any handle on sentience. It seems to be quite a different thing from a program running on a non-living machine.”

However, Sparrow admits that the possibility of a sentient robot is very concerning.

“I think that potentially it is immensely dangerous,” he says. “Some of the people who believe that we are on the verge of creating machine consciousness themselves believe that this will make human beings obsolete, that we will quickly be outthought by our machines.”

Being forced into submission by a race of superior beings is an unsettling prospect and strikingly similar to Victor Frankenstein’s own fears. In the novel, Victor refuses to make a mate for his Creature for fear that their children might supersede the human race, describing the children as “a race of devils … who might make the very existence of the species of man a condition precarious and full of terror.”

“If you are looking for a contemporary Frankenstein it is artificial intelligence,” says Sparrow. “That is where people think that one day there is a chance that we will make something that will look back at us, or maybe even decide to wipe us all out.”

Elon Musk has called for regulatory oversight of AI research to make sure that no one does anything “very foolish”. This is exactly what Noel Sharkey and others from the Foundation for Responsible Robotics are attempting to implement, starting at a much simpler level than the search for true AI. “We must be very wary of the control that we cede to machines and always ensure human oversight,” says Sharkey.

“It would not take super-intelligent machines to take over the world. The natural stupidity of humans could grant too much control to dumb machines,” he says. “But I believe in humanity and our ability to stay in charge, providing that we begin to put good policies in place now by bringing all of the stakeholders together to discuss the common good.”

Shelley’s warning

In the most recent movie adaptation of Mary Shelley’s novel, Victor Frankenstein’s assistant Igor attempts to reassure a scared female acquaintance with the words: “Every day, science and technology changes the way we live our lives.” Yet he avoids declaring how the changes might affect humanity.

At the time Shelley wrote Frankenstein, the Enlightenment - an era of rapid advances in science and technology - was drawing to a close. Her novel pointed to the problem of seeing every advance as inevitably a force for good. Victor realises too late the nature of what he has achieved: “I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart.”

AI, synthetic biology and programmable matter all have the potential to change our lives, but they may contain the essence of monsters nobody wanted to create. The difference from the novel is that people are thinking of how it might go wrong - and how to prevent it. Just watch out for those who claim the results will naturally be beautiful.
