The rise of the thinking machines

31 March 2014
By Edd Gent
Cognitive computers like IBM's Watson could soon be carrying out some of the tasks currently performed by skilled professionals

At the intersection of Big Data and artificial intelligence, computers are quickly beginning to rival the decision-making power of humans. While the technology has the capacity to offer vast improvements in precision and efficiency, it also raises questions about how much responsibility should be ceded to machines and what humans’ role will be in the future workplace.

In a whitepaper released last month, IT professional body BCS highlighted cognitive computing as one of the next great waves of computing, but one that also has the potential to become the most controversial technology in the world by the end of the decade. Using a combination of artificial intelligence and machine learning, cognitive systems continually learn from the data fed into them, refining the way they work, anticipating problems and inferring solutions.

While some proponents’ claims that they come close to ‘thinking’ in a way comparable to a human may be wide of the mark, their ability to digest and comprehend huge amounts of unstructured data in the form of video, images and, most of all, natural language means that in certain arenas they can far outperform people.

The first public demonstration of the technology’s potential was in 2011 when IBM’s Watson cognitive computer system, which was specifically developed to play American quiz show Jeopardy!, made the headlines by beating two former winners to take the show’s $1m prize. The demonstration proved a computer could reason creatively and IBM is now leveraging the technology for more practical purposes, creating a business unit around the Watson technology in January.

While, in its infancy, the technology has mainly been used to refine manufacturing processes and automate decision making in some areas of financial services, it is quickly progressing to the stage where it will be able to vie with humans who have years of training and professional experience. Technology analysts Gartner forecast that by 2020 a majority of knowledge workers’ career paths will be disrupted by cognitive computing technology.

According to Rashik Parmar, president of IBM’s Academy of Technology, the first area likely to be affected by cognitive computing is medicine. The average doctor has to read about 100,000 A4 pages of text a year to stay up to date in their field, he says, but by feeding this information into a machine that can trawl through the data in the blink of an eye, doctors could save themselves thousands of man-hours a year and make complicated diagnoses in seconds rather than days.

By combining this database of research with information from the tests done on patients, a cognitive computer could provide doctors with a series of potential diagnoses in a fraction of the time it would take a human. “It doesn’t replace the individual, it actually augments what he does,” says Parmar. “It provides almost a dialogue for the doctor to be able to make a more informed decision for that particular patient.”
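
In the loosest terms, the kind of support Parmar describes can be pictured with the toy sketch below. It is a hypothetical illustration, not IBM's Watson pipeline: every condition, finding and score is invented, and it simply ranks candidate diagnoses by how much of the evidence associated with each condition shows up in a patient's findings.

```python
# Toy illustration (not IBM's Watson pipeline): rank candidate diagnoses by
# how much of the evidence linked to each condition appears in a patient's
# findings. All conditions, findings and data here are invented.

# Hypothetical evidence index: condition -> findings the literature associates with it
literature_index = {
    "iron-deficiency anaemia": {"fatigue", "pallor", "low ferritin", "low haemoglobin"},
    "hypothyroidism": {"fatigue", "weight gain", "high TSH", "low T4"},
    "type 2 diabetes": {"fatigue", "polyuria", "high HbA1c", "high fasting glucose"},
}

def rank_diagnoses(patient_findings, index):
    """Score each condition by the fraction of its evidence terms present in
    the patient's findings and return a ranked shortlist."""
    ranked = []
    for condition, evidence in index.items():
        matched = patient_findings & evidence
        ranked.append((len(matched) / len(evidence), condition, sorted(matched)))
    return sorted(ranked, reverse=True)

patient = {"fatigue", "pallor", "low ferritin", "low haemoglobin", "headache"}
for score, condition, matched in rank_diagnoses(patient, literature_index):
    print(f"{condition}: {score:.2f} (matched: {', '.join(matched)})")
```

A real system would read unstructured text, weight its evidence and express uncertainty, but the shape is the same: a ranked shortlist for the clinician to review, not a final answer.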

Creating context

At present the main technological challenge faced by researchers is enabling the machines to create contextual models as a means to understand unstructured, often text-based data. “The biggest challenge is being able to ingest the info,” says Parmar. “If you take a basic report, just creating the linkages within the report is one challenge. Then you’ve got to create linkages across reports.” Currently machines come up with suggestions that have to be signed off by humans, but machine learning algorithms mean the devices constantly monitor the choices humans make, using this to improve their performance. The hope is that one day human input will no longer be needed.
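
That sign-off loop can be sketched very simply. The snippet below is an assumption-laden illustration rather than a description of IBM's implementation: it nudges a weight for each suggestion up or down depending on whether a human accepts or rejects it, so later suggestions reflect earlier choices.

```python
# Illustrative sketch of a human sign-off loop (hypothetical, not IBM's
# implementation): suggestions carry weights that are adjusted according to
# the choices humans make when reviewing them.

from collections import defaultdict

class SuggestionModel:
    def __init__(self, learning_rate=0.1):
        self.weights = defaultdict(lambda: 1.0)  # every option starts with the same weight
        self.lr = learning_rate

    def suggest(self, options, top_n=3):
        # Put the currently highest-weighted options in front of the human reviewer.
        return sorted(options, key=lambda o: self.weights[o], reverse=True)[:top_n]

    def record_feedback(self, option, accepted):
        # Reward options the reviewer signs off on; penalise rejected ones.
        self.weights[option] += self.lr if accepted else -self.lr

model = SuggestionModel()
candidates = ["suggestion A", "suggestion B", "suggestion C"]
print(model.suggest(candidates))            # initial order, no feedback yet
model.record_feedback("suggestion B", accepted=True)
model.record_feedback("suggestion A", accepted=False)
print(model.suggest(candidates))            # suggestion B now outranks suggestion A
```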

As with most Big Data-related fields, bandwidth and server load are also going to be key considerations for those designing cognitive systems. According to Parmar, the true advent of the cognitive computing age will come when the technology begins to be embedded in cloud services, but this will put huge strains on data handling capacities. One solution IBM is investigating utilises ‘edge computing’, in which some of the analytical work is pushed to the extremes of the network, ensuring that only the most relevant data has to be transmitted back to the central servers.
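
The idea can be pictured with the toy example below. The thresholds, data and function names are hypothetical, not IBM's design: an edge node screens raw readings locally and forwards only the unusual ones, so most of the data never has to cross the network.

```python
# Toy sketch of the edge-computing pattern described above (hypothetical
# thresholds and data, not IBM's design): analyse locally, transmit only
# the readings that merit central attention.

def edge_filter(readings, low=10.0, high=90.0):
    """Cheap local analysis: keep only readings outside the expected range."""
    return [r for r in readings if r["value"] < low or r["value"] > high]

def transmit_to_central(relevant):
    # Stand-in for the network call back to the central servers.
    print(f"Transmitting {len(relevant)} readings to central analytics")

raw = [
    {"sensor": "s1", "value": 42.0},
    {"sensor": "s2", "value": 97.5},   # anomalous: forwarded
    {"sensor": "s3", "value": 55.3},
    {"sensor": "s4", "value": 3.1},    # anomalous: forwarded
]

transmit_to_central(edge_filter(raw))  # only 2 of the 4 readings leave the edge
```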

While many of the technological questions have yet to be definitively answered, Parmar says we are not far from a world in which these cognitive capabilities will be available on smartphones or laptops. IBM announced last November that it would make an application programming interface (API) for its Watson system available to developers, and at last month’s Mobile World Congress event in Barcelona the firm announced a competition that will see three winners work with IBM to create prototype mobile apps using the Watson technology.

“The barrier for innovation then effectively comes down significantly, almost to the scale that kids coming out of university can start to use these types of capabilities to address challenges and issues in their own space or environment in ways you can’t imagine now,” says Parmar.

“You won’t see it. You will pay for something that makes life easier for you. The fact it is cognitive won’t matter to you as an individual. For the developer it will matter. They will be able to link into cognitive capabilities that make things much simpler.”
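
For a developer, linking into such a capability would most likely look like calling any other hosted web service. The sketch below uses a deliberately hypothetical endpoint, payload and key (it is not the actual Watson API) purely to show the general shape of that kind of call.

```python
# Hedged sketch of calling a hosted question-answering service over HTTP.
# The endpoint, payload and response format are hypothetical placeholders,
# not the actual Watson API.

import requests

def ask_cognitive_service(question, api_key):
    # Hypothetical REST endpoint; a real service documents its own URL,
    # authentication scheme and response schema.
    response = requests.post(
        "https://api.example.com/v1/question",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"question": question},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Example (requires a real service and key):
# answer = ask_cognitive_service("Which conditions match these symptoms?", "YOUR_KEY")
```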

Dangers of delegation

While the benefits of such a cognitive computing revolution could be enormous, delegating increasing amounts of decision making responsibilities to machines raises a number of concerns. One area that has been touted as a future market for cognitive computing is financial services, but the 2010 Flash Crash – in which the Dow Jones Industrial Average plunged about 1,000 points only to recover within minutes – was triggered by a "high-frequency trading" computer algorithm.

For Professor Peter Millican, who teaches an undergraduate degree in Computer Science and Philosophy at the University of Oxford, this highlights the dangers of leaving machines to make economically important judgement calls.

“If you have computers that are automatically behaving in ways designed to maximise profits, you can end up with computers all over the globe acting in the same way, and these waves of fashion increase and multiply, all with tremendous speed,” he says.

Another example of the dangers of automating reasoning highlighted by Millican is the patent process. Drugs companies could use cognitive computers to devise new medicines by rapid trial-and-error research using genetic and biochemical data and rules about how basic elements of the drugs combine. This could tempt them to file patents in huge numbers, beyond their practical capacity to exploit them, potentially stifling medical innovation.

But for Millican, the main problem doesn’t lie with the decision making capabilities of cognitive computers. “I don’t think one should assume humans are that good at making decisions,” he says. “The idea that humans always produce better decisions and we should shy away from computers making them is a mistake.”

Instead, it is the application of the technology within society that is the problem. He likens it to nuclear energy, which holds both huge potential and huge pitfalls for mankind. Whether the technology leads to the salvation or damnation of the human race is determined by how it is used, which in turn is determined by the structures and institutions humans have put in place.

“As we go into this new world, we have got to rethink a lot of what we do. We can’t just bring in cognitive computing and leave everything else in place,” he says. “What is important is that we, as carefully thinking human beings, reorganise our institutions and ways of working so we don’t screw up because of these machines.”

There is also the danger that this technology could put increasing amounts of influence into the hands of those specialists designing cognitive systems. “Of course the potential for abuse is there,” says Millican. “If you’ve got a group of experts writing the algorithms, how can you be confident they are writing them with the public interest at heart rather than some private interest? Anything that puts a lot of power into few hands is potentially dangerous.”

Evolution of the workplace

This transfer of power into the hands of a small group of specialists also highlights another issue. As computers become more capable they will increasingly encroach on roles currently occupied by humans, and as the technology matures and prices fall there is the danger that knowledge workers could face the same challenges that advances in robotics are already posing for factory workers.

“I think there is the potential danger, almost for unrest, because with change comes great uncertainty and I don’t think that can be dismissed,” says the BCS’ director for professionalism Adam Thilthorpe. “But I don’t see mass unemployment. I think instead the technology will blend into other roles and areas. For example, it will become very difficult to be a great marketing person without being tech savvy.”

And despite the potential negatives, Thilthorpe says humans need these cognitive capabilities. “The exponential growth of data means we need computers to do some of the thinking for us.” The inevitable blurring of the lines between roles will mean workers across departments have to become more collaborative, both with each other and with machines.

For IBM’s Parmar, rather than skills “atrophying” to simply maintaining these thinking machines, two new broad categories of roles will develop – one exploiting the machines and the other constantly enhancing their capabilities.

“I think what happens here is the skill set of the individual moves to a new place,” he says. “While maybe the roles you had before will be replaced by the technology, you are creating two very different roles that contribute to and complement each other to ensure the system delivers value.”

With computers doing the leg work, greater emphasis will be placed on data science skills such as knowing how to design systems that can identify what they need to learn to solve a problem, how they can obtain the necessary data, and how they should use it. Thilthorpe says business leaders are going to want IT and computing professionals who can put them ahead of the technological curve.

“If you can’t disrupt your own business then someone else is going to do it for you,” he says. “Businesses that don’t know they’re in the technology game are going to get disrupted and cognitive computing is only going to speed that up.”

This places another burden of responsibility on IT and computing professionals, one of communication. “It’s IT and digital leaders who have to explain and communicate this to their organisations. We need to act as communication players so people understand the opportunities coming along.”
