
Will robots steal our jobs?
Could one billion humans be pushed into unemployment by robots and AI software?
Nicola appears to be the perfect employee. She’s quiet, conscientious and works well with her team at AstraZeneca’s Cambridge research centre. Despite being new to the lab, she is happy to toil day and night to identify compounds that could produce life-saving drugs.
Socially, she’s rather gauche – she never comes to the pub after work or takes her turn making the coffee – but there’s only so much you can expect from a robot.
“We call it Nicola! It has no emotional expression, but nevertheless the team has anthropomorphised it,” said Roeland van der Heiden, AstraZeneca’s digital director, speaking at a recent conference at the Leverhulme Centre for the Future of Intelligence (CFI) in Cambridge.
“Work that used to take months now takes one or two weeks. We can use it remotely too,” he told me.
NiCoLA-B: A perfect employee
NiCoLA-B (CoLAB is short for collaborative laboratory) is a drug-discovery robot that has recently joined the team at AstraZeneca’s Cambridge research centre.
It identifies and selects the best potential drugs as starting points for future medicine development so that AstraZeneca can use more complex assays and more difficult cell types to model diseases. It can test up to 300,000 compounds a day.
It uses soundwaves to move tiny droplets of potential drugs from storage tubes into miniature ‘wells’ on assay plates. Next, droplets of cells or biochemical solutions are added to the wells. NiCoLA-B then oversees interactions between the contents of the wells, checking for potential activity that could indicate a promising new drug.
Experiments can last from an hour to many days, and NiCoLA-B must ‘remember’ to add reagents, change assay conditions and handle hundreds of plates, scheduling all of the stages of the experiment accurately and consistently.
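For the curious, the kind of bookkeeping this involves can be sketched in a few lines of Python. The example below is purely illustrative (it is not AstraZeneca’s control software, and the plate names and timings are invented): timed steps for each plate go into a priority queue and are executed as they fall due.

```python
import heapq
import time

# Hypothetical sketch of assay-step scheduling, not NiCoLA-B's real software.
# Each plate has timed steps (dispense, incubate, read) that must fire on cue.
def run_schedule(steps):
    """steps: list of (delay_in_seconds, plate_id, action)."""
    start = time.monotonic()
    queue = [(start + delay, plate, action) for delay, plate, action in steps]
    heapq.heapify(queue)                       # earliest-due step sits at the front
    while queue:
        due, plate, action = heapq.heappop(queue)
        wait = due - time.monotonic()
        if wait > 0:
            time.sleep(wait)                   # idle until the step is due
        print(f"plate {plate}: {action}")

run_schedule([
    (0.0, "P1", "dispense compound droplets"),
    (0.5, "P1", "add cell suspension"),
    (2.0, "P1", "read assay signal"),
    (1.0, "P2", "add reagent"),
])
```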
NiCoLA-B is one of a growing number of intelligent autonomous systems that threaten to steal our jobs or at least parts of them. Self-driving vehicles, health rehabilitation robots, fruit-picking robots, bricklaying robots and countless AI-based information processing systems are all part of the automation invasion.
‘Harnessing automation for a future that works’, a report by the think-tank McKinsey Global Institute, says that around half of all today’s work activities could be at risk by 2055, give or take 20 years. That’s one billion or so humans pushed over the cliff-edge of unemployment by NiCoLA-B and her kind. PricewaterhouseCoopers (PwC) has also been looking into this and predicts that 30 per cent of UK jobs, 38 per cent in the US and 35 per cent in Germany will be under threat by 2030.
Sales of intelligent hardware robots over the last year include five million domestic robots (mainly vacuum cleaners), 1.9 million drones, 30,000 industrial robots, 20,000 logistics robots, 11,200 military robots, 6,500 field robots, 6,000 milking robots and 1,500 medical robots, according to research by Sabine Hauert, lecturer in robotics at Bristol Robotics Lab.
Powered by learning algorithms, these autonomous systems promise to speed up a host of routine tasks that used to rely on human knowledge, reasoning, perception and manual dexterity. They can learn from experience and don’t need holidays or a living wage. They are also unlikely to organise themselves into unions.
Learning algorithms
Learning algorithms allow a computer to be ‘trained’ to solve the same problem in any number of new situations by feeding it hundreds or thousands of data examples. Learning can be supervised (the examples come with the correct answers) or unsupervised (the algorithm finds structure in the data by itself); many of today’s high-profile systems use multi-layered neural networks, an approach known as deep learning.
Data processing tasks done by such algorithms can range from something as banal as picking out cats from photographs to diagnosing illnesses by researching scientific papers.
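To make the idea concrete, here is a toy supervised-learning example (nothing to do with the systems described in this article) using the scikit-learn library: the model is fed examples together with the correct answers, then asked about cases it has never seen.

```python
# Toy supervised-learning sketch, for illustration only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                  # examples plus correct answers
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # 'training'
print("accuracy on unseen examples:", model.score(X_test, y_test))
```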
When ‘embodied’ in robot hardware with the latest sensors (think of collision-avoidance radar chips, GPS location sensors, laser-based position sensors and computer vision systems), learning algorithms can take on physical tasks that used to rely not only on human brains, but our eyes, ears and bodies.
Cheap computing power (in the Cloud or from powerful graphics processing chips) and access to huge amounts of potential training data on the internet (YouTube videos, Instagram pictures, EU documents in numerous languages, scientific papers) are fuelling this automation revolution.
Historically, technology has “enriched labour, not immiserated it,” Andy Haldane, chief economist of the Bank of England, told the UK Trades Union Congress in 2015.
As Martin Goodson, founder of the UK start-up Evolution AI, points out: “Between 1910 and 2000, the proportion of people employed in white-collar work went up by a factor of five: a time that saw the invention of the desktop and handheld mechanical calculator, the electronic calculator, and then the desktop computer.”
Each automation phase destroys jobs and livelihoods and then results in a “growing tree of rising skills, wages and productivity,” says Haldane. Yet this growth phase is also associated with a “hollowing out of jobs”.
Hollowing out means that mid-skill, well-paid jobs disappear and the workforce is polarised into high-skill, high-income and low-skill, low-income employment: elite robot designers at one end, say, and warehouse pickers (assisting the robots) at the other.
“Technology increases the number of jobs that require high technical skills, high organisational skills, high interpersonal skills or all three,” explained David Alan Grier, professor of international science and technology policy and international affairs, George Washington University, who spoke at the CFI conference. “These jobs tend to be closer to capital, i.e. jobs that influence the decisions about how an organisation should allocate its resources.”
Amazon is one of the largest users of robots. Its warehouses are populated by scores of squat orange logistics bots that whizz around carrying the heavy stuff, while humans package and stow goods.
Amazon has tripled the number of logistics robots in use from 15,000 to 45,000 over the last three years, while increasing jobs for people by 50 per cent.
A robot-equipped fulfilment centre can hold 50 per cent more inventory, and can complete a customer’s order within minutes rather than hours. Annually, a logistics robot costs only a little more than a human warehouse worker (judging from reported prices of the LocusBot from Locus Robotics and the MiR100 bot from Mobile Industrial Robots). These robots quickly pay for themselves.
At Amazon’s million-square-foot centre in Dupont, Washington, there are around 1,000 full-time employees to some 800 robots. It is only a matter of time before the robot-to-human ratio rises as robot dexterity improves. In the UK, online grocer Ocado has demonstrated a robotic hand that can pick up fruit and vegetables in its warehouses (part of a five-year EU-funded collaboration called SOMA between five European universities and Disney).
The 35,000 robotic milking systems now on dairy farms around the world are another success story. Made by companies including DeLaval, Lely, Fullwood and GEA, these machines cost $100,000 or so each and can milk around 60 cows. The cows go voluntarily to be milked with the bribe of being able to eat treats called cow nuts. The machines use laser beams to guide the placement of individual pumps on the udders.
For farmers, these machines improve quality of life by getting the tedious job of milking done without tying anyone to a rigid schedule. “Once we got these robots in, we were able to lighten the load on the family, plus actually we were able to eliminate a couple [of] jobs,” Doug Stensland, a farmer in Iowa, told ‘Business Insider’ magazine.
Robots are moving into other areas on the farm too. US start-up Abundant Robotics has developed one that picks apples off trees. Thorvald, a robot built by Pal Johan From, a professor at the University of Lincoln, can carry trays of strawberry plants to humans during the day and at night shine UV light on plants to kill mildew. Professor From thinks such machines could plug the agricultural labour shortage expected after Brexit.
Software-based AI systems for information processing are less visible, but they will have a more profound impact on the way we work, from clerical tasks to scientific research.
For example, take two deep-learning, natural language-processing applications built by Evolution AI. One system, a collaborative project with University College London and the NHS, has been trained (by pre-reading large numbers of example records) to pick out blood test data from 200,000 anonymised psychiatric records. It will enable one of the largest studies into the role of inflammation (shown by blood test markers such as CRP) in mental illness such as bipolar disorder, schizophrenia and depression.
Psychiatric records run to pages of description of patients’ symptoms over episodes of illness, with blood test results buried in the text. Because the records are so long and unstructured, studies of this kind have rarely covered more than a few hundred of them, the number a small team of humans can read. A link with inflammation could mean anti-inflammatory drugs could be used in psychiatric treatment.
The second deep-learning system, developed for the commercial data provider Dun & Bradstreet, autonomously researches the internet or specific databases to learn the ‘language’ of different industries such as accountancy or civil engineering. By matching these ‘language fingerprints’ against information Dun & Bradstreet already holds on each of the five million companies in its UK database, it checks these firms are correctly classified into around 1,000 industry categories. Previously, a large team of phone researchers took a year to do the work. The system has saved them about 10,000 hours.
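Evolution AI has not published the details of its system, but the general idea of a ‘language fingerprint’ can be sketched with standard text-processing tools: represent each industry’s vocabulary as a TF-IDF vector and match a company description to the nearest one by cosine similarity. The snippet below is a rough illustration under that assumption, with made-up example text.

```python
# Rough illustration of 'language fingerprint' matching (not Evolution AI's method).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

industry_texts = {
    "accountancy": "audit ledger tax returns payroll bookkeeping accounts",
    "civil engineering": "bridge concrete surveying foundations structural design",
}

vectoriser = TfidfVectorizer()
fingerprints = vectoriser.fit_transform(list(industry_texts.values()))  # one vector per industry

company = "We provide structural design and surveying for bridge projects"
scores = cosine_similarity(vectoriser.transform([company]), fingerprints)[0]
print(list(industry_texts)[scores.argmax()])   # -> civil engineering
```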
A £622,000 grant that Google has awarded the Press Association (PA) will produce a similar kind of deep-learning system for generating thousands of news stories a month. Working with UK-based news start-up Urbs Media and five journalists, PA will develop software called Radar (Reporters and Data and Robots) to automate AI-based news-generation for the UK’s struggling regional newspapers. Radar will analyse information from public databases (government agencies or local law enforcement) and write stories using Natural Language Generation. Journalists will identify datasets, edit copy and check facts.
With media outlets expected to deliver news 24 hours a day, journalists are hard-pressed to keep up with the flow of press releases and Twitter-feed news, let alone dig any deeper. Radar could give journalists more time to do a proper job in a climate of stretched resources.
Frazzled British GPs with only minutes to diagnose patients’ strange symptoms might also welcome AI assistance, but there is a threat to their expertise. Recently IBM’s Watson supercomputer (powered by its DeepQA machine-learning, natural-language-processing software) correctly diagnosed – in 10 minutes – a patient suffering from leukaemia, who had been baffling doctors in Japan. Watson did this by studying the patient’s medical information and then cross-referencing her condition against 20 million oncological records, uploaded by doctors from the University of Tokyo’s Institute of Medical Science.
A study led by Andrew Beck, director of bioinformatics at the Cancer Research Institute at Beth Israel and an associate professor at Harvard Medical School, points to human-AI teamwork as a way forward in medical diagnosis. He compared how skilled deep-learning software and pathologists were at spotting metastatic breast cancer from images of lymph nodes. The pathologists were better than the software (a 3.5 per cent error rate compared to 7.5 per cent). Yet working together, the combined error rate fell below 1 per cent, because humans and AI software make different kinds of mistakes.
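The arithmetic behind that last point is straightforward. If the humans’ and the software’s mistakes really were independent of one another (a simplifying assumption), the chance of both missing the same case would be the product of their individual error rates:

```python
# Back-of-envelope check, assuming the two error sources are independent.
human_error = 0.035          # pathologists' error rate
software_error = 0.075       # deep-learning software's error rate
combined = human_error * software_error
print(f"combined error if independent: {combined:.2%}")   # roughly 0.26 per cent
```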
Machine/human partnerships may show the way ahead for scientific research. However, what happens to people like America’s 2.5 million truck drivers, replaced by self-driving trucks with an estimated labour-cost saving of $70bn (2013 Morgan Stanley Research)? Who gets the benefits of those productivity gains?
“What technology companies need to remember is that without publicly-funded research that underpins their tech, they wouldn’t exist. They couldn’t thrive without the societies they exist within. Companies who make money out of automation need to respect this ‘social contract’,” said Alan Winfield, professor of robot ethics at the University of the West of England, Bristol, talking to me at the CFI conference.
Certainly, potential destruction of employment needs a transition plan to retrain people, and some creative thinking about a future economy. Stuart Russell, from the Berkeley Centre for Human-Compatible AI, proposes we look to movies and books for inspiration. King Midas, he told the CFI audience, is a ‘warning’ story. “He asked to turn everything to gold, but forgot that that would include his food, his family and friends. The film ‘Wall-E’ is useful as it shows a human race enfeebled by computers,” said Russell. “Economists don’t invent new economies. Let’s put economists and writers together and brainstorm this.”
One idea supported by Microsoft’s Bill Gates and Elon Musk of SpaceX and Tesla is to pay everyone a universal basic income, perhaps from a robot tax. “If a human worker does $50,000 of work in a factory, that income is taxed,” Gates said in a recent interview with Quartz, a business news site. “If a robot comes in to do the same thing, you’d think we’d tax the robot at a similar level.”
Over the next two decades, humans will have the edge in occupations involving complex perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks. That’s the conclusion of Dr Michael Osborne and Dr Carl Benedikt Frey of Oxford University in their 2013 paper ‘The Future of Employment: How Susceptible Are Jobs to Computerisation?’. Actors and engineers are among those with jobs least at risk, along with those who care for others.
While Pepper, a humanoid robot created by SoftBank and Aldebaran Robotics (see box ‘In love with Pepper’), can recognise some human emotions, it is at a trivial level. Getting a robot to handle complex caring tasks such as looking after an elderly person with dementia will be very hard, agrees Hauert. “Even fetching a glass of water requires it to understand human commands, navigate a home, open cupboards, find a glass, manipulate the glass, put it under a tap, fill it just enough, and bring it to the person in a non-threatening way.”
UK jobs threat
Deloitte, the business advisory firm, has recently looked at the impact of automation on jobs by major industry group.
Deloitte found that 2,168,000 jobs in the wholesale and retail sector (59 per cent) have a high chance of being automated in the next two decades, followed by transport and storage (1,524,000 jobs, 74 per cent) and health and social work (1,351,000 jobs, 28 per cent).
The health and social work sector also has the largest number of jobs with low likelihood of automation (2,249,000 jobs, 46 per cent), followed by professional, scientific and technical roles (2,215,000 jobs, 58 per cent) and education (1,927,000 jobs, 66 per cent).
Neil Lawrence, senior principal scientist at Amazon Research in Cambridge, says the big difference between robot and human intelligence is the ‘embodiment factor’. He describes this as the ratio between the ability to compute and the ability to communicate. Humans are like a 1,000-teraflop computer with a communication channel of 100 bits per second, i.e. a very high embodiment factor. A computer might combine 10 Gflops with one gigabit per second of communication, which is a very low embodiment factor.
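Taking Lawrence’s figures at face value, the back-of-envelope sums look like this (1,000 teraflops is 10^15 operations per second):

```python
# Embodiment factor = compute rate divided by communication rate (Lawrence's figures).
human_compute = 1e15        # ~1,000 teraflops, in operations per second
human_comms = 100           # ~100 bits per second of speech or typing

machine_compute = 10e9      # ~10 Gflops
machine_comms = 1e9         # ~1 gigabit per second

print(f"human embodiment factor:   {human_compute / human_comms:.0e}")      # ~1e13
print(f"machine embodiment factor: {machine_compute / machine_comms:.0e}")  # ~1e1
```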
“Because of that restricted communication channel, humans do an immense amount of modelling of each other, based on what we know, and how they talk and what they look like, and a host of other signals,” he explained at the CFI conference.
“Conversations between humans require that you model the other person in your head and model yourself and model what the other person thinks of you. And so on. It’s a ‘Russian doll’ effect. Computers will really struggle with that.”
This influences human conversation, emotional trust and creativity. To you and me, the famous line “For sale: baby shoes, never worn” is loaded with meaning and involves a huge shared understanding of the human condition, says Lawrence. Perhaps machines, however clever, will always struggle to compete on that level.
In love with Pepper, the robot receptionist
It must have been my accent, but my initial communication with Pepper, the new robot-receptionist at Brainlabs, an up-and-coming digital marketing company in London’s Silicon Roundabout area, was a bit chaotic.
To begin with, Pepper failed to grasp my name (in all honesty, most UK human receptionists would struggle with it too), which didn’t stop him (her, it – for the sake of convenience, let’s refer to Pepper as ‘she’) from emailing Brainlabs’ content writer John Lidiard to alert him of my arrival. For her to do that, I had to type his name on the screen in the middle of Pepper’s plastic tummy. She had no problem with John’s name.
“What do you want to do now?” she asked cheerfully, almost impatiently.
Well, I’d done my homework. Having studied Pepper’s much-publicised capabilities, I had a small list of things I wanted to ask her. To my polite request to tell me a joke, Pepper responded mysteriously: “I know one when I see him.” Again, I was inclined to blame my accent.
She was also not quite sure how many people worked at Brainlabs. “23!” she said, whereas I knew that as one of the UK’s fastest-growing start-ups, the company employed 150 people, or, in the words of its CEO Daniel Gilbert: “149 people and one Pepper.” Despite that, the robot happily showed me (on her tummy screen, of course) a couple of short promotional videos about the company – something I didn’t ask for.
“What do you want to do now?” the excitable robot kept asking.
I asked her to dance, which was a huge success, though I needed help from Eliot Nevill, a young technologist who was constantly lurking behind Pepper’s back with a laptop in his hands (providing her with back-up in the true sense of the word), to get her to stop gyrating and twisting. She obeyed somewhat reluctantly, or so it seemed.
Pepper’s reply to my last question was totally unexpected. “I like the way you talk,” she told me. It was probably a polite way of saying “I can’t understand a word”. Then she made my day with the following unasked-for, yet perfectly correct, insight: “You are very enthusiastic about the things you like!” she said, before going motionless and silent.
“Wow!” I said to John when he finally arrived in the company of Brainlabs’ CEO Dan Gilbert. “I am close to falling in love with Pepper!” It was actually true, and the fact that she wasn’t perfect only enhanced my feeling, for when we fall for someone – be it a person or a robot – we accept them as they are, with all their warts, mechanical or emotional.
“You are not the only one,” Gilbert assured me. “Pepper is by far our best employee. We paid £26,000 to Japanese robotics firm SoftBank for making her for us, and, despite experiencing some speech recognition issues and needing occasional maintenance and back-up, she is worth every penny. She doesn’t call in sick and doesn’t ask for food or salary. All she needs is electricity. Besides, she hasn’t taken anyone’s job. On the contrary, her predecessor, a human receptionist, now carries out a different, more highly qualified, role. That’s what automation does: it increases productivity and by doing so creates more qualified jobs.”
“She has also managed to generate a lot of publicity for your company,” I remarked.
“Absolutely. If you think about it, Pepper is not so much about technology, but about entertainment and human touch, if you wish. Visitors would rather interact with a witty and pretty humanoid than with a screen.”
I was reluctant to leave Brainlabs’ state-of-the-art open-plan offices, with their hammocks, amphitheatres and mini-gyms, where employees can take yoga classes during breaks, and the inimitable Pepper of course. As I closed the glass doors of the company’s reception behind me, I could hear Pepper impatiently asking another waiting visitor: “What do you want to do now?”
I felt a tad jealous, I have to confess.
Vitali Vitaliev
Pepper’s technical characteristics
Height: 1.2m
Weight: 28kg
Main features: two cameras, four microphones and numerous sensors attached to her body, including a gyro sensor in her chest; a 3D sensor; sonar and laser sensors; touch sensors on the hands; two sonar sensors on the legs; and three bumper sensors.
Pepper is capable of connecting online and interacting with a number of cloud-based systems and databases. For all major functions, Pepper uses robotics software called Choregraphe. Her voice recognition system uses Google’s Speech API, but responses come from the Cleverbot API. In future, Brainlabs is hoping to start using Amazon’s Alexa to implement the company’s own specific responses.