Blind trust in machines can be risky, researcher warns
According to a study by Penn State University researchers, people tend to trust machines more than other people to handle their private information. This trust, the researchers warn, can fail to account for the dishonest motivations that sometimes lie behind a machine's design.
Far from hesitating to share sensitive information with faceless machines, people are actually more likely to disclose it to machines than to other humans, the study found.
In the study, the researchers asked 160 participants to use either a human or a bot to help them find and buy a plane ticket online. After producing flight information, the human or bot assistant asked participants to share their credit card details. The researchers then measured participants' trust in the machine or human assistants through a series of questions about the interaction. They found that participants who reported high trust in machines were significantly more likely to share their credit card details with the bot than with the human assistant, while those who did not report high trust in machines showed no difference between the two.
According to Professor S. Shyam Sundar, co-director of the Media Effects Research Laboratory, this may reflect a bias toward seeing machines as more trustworthy and secure than humans. “This tendency to trust the machine agent more than the human agent was much stronger for people who were high on the belief in the machine heuristic. For people who did not believe in the machine heuristic, it didn’t make a difference whether the travel agent was a machine or a human,” he explained.
Sundar suggests that the presence of the bot on the interface was a cue for this ingrained belief that machines are more trustworthy: a feeling that may come from the idea that machines are ‘lawful’, do not gossip and do not have their own selfish motivations for collecting information.
While this trust could be useful in designing user-friendly interfaces that make people feel comfortable sharing sensitive but necessary information online - such as to complete payments - Sundar warns against putting blind faith in technology. Behind every apparently faceless machine interface are the human software engineers who designed it - a handful of whom may well have dishonest motivations for extracting information from others, such as through phishing scams.
“This study should serve as a warning for people to be aware of how they interact online,” Sundar said. “People should be aware that they may have a blind belief in machine superiority. They should watch themselves when they engage online with robotic interfaces.”
Sundar added that even people who report a high degree of trust in machines need only small nudges to be reminded that they are interacting with a machine: “One thing I would like to stress is that the designers have to be ethical. They should not unethically try to extract information from unsuspecting consumers.”