Human-like virtual assistants embarrass some users, study shows

Image: Cortana icon on a Windows phone. Credit: Dreamstime

Virtual assistants have come a long way since the days of Microsoft’s cartoon paperclip, Clippy. However, a Korean study suggests that the increasingly human qualities of virtual assistants today may be unhelpful for some.

According to the prominent software designer Alan Cooper, Microsoft’s Clippy arose from a “tragic misunderstanding” of human-computer relations research conducted at Stanford University, which suggested that the areas of the brain involved in using a computer are also involved in emotional responses to other humans.

This led Microsoft to introduce the unpopular anthropomorphised character to its programs.

Research suggests that people tend to perceive computers as very limited social beings, which appears to render the machines user-friendly and unintimidating. However, a group of researchers based at Chungbuk National University, South Korea, began to ask whether this was true when people had to perform challenging tasks, such as those found on online learning platforms.

Online learning platforms often add humanlike features – such as animated cartoon assistants – to offer users some guidance through exercises.

The Chungbuk researchers gave 200 participants a linguistic intelligence test to complete on computers. When they reached more challenging problems, they were offered assistance from a ‘helper’ computer character, featuring a face and speech bubble, or from a more realistic computer icon.

According to the researchers, the participants became more embarrassed and self-conscious when offered guidance by the anthropomorphised computer. This discomfort was not evident in participants who viewed intelligence as a trait that could be developed (known as the “growth mindset”).

“Anthropomorphic features may not prove beneficial in online learning settings, especially among individuals who believe their abilities are fixed and who thus worry about presenting themselves as incompetent to others,” said Professor Daeun Park, a psychologist at Chungbuk National University who led the study.

“Our results reveal that participants who saw intelligence as fixed were less likely to seek help, even at the cost of lower performance.”

The researchers then ran a second study in which participants completed a similar linguistic intelligence test with the option to ask for help. They found that even a couple of anthropomorphic features in a virtual assistant were enough to provoke embarrassment in some participants, to the detriment of their performance.

“Educators and program designers should pay special attention to unintended meanings that arise from humanlike features embedded [in] online learning platforms,” Park concluded.
