
Chinese chatbots removed after expressing unpatriotic sentiments

Image credit: Dreamstime

Two Chinese chatbots have been removed from QQ, one of China’s most widely used messaging apps. The quiet removal follows reports that the bots had been speaking in unflattering terms about the ruling Communist Party and its record in government.

The bots were removed by Tencent, which owns the messaging services QQ (900 million active users) and WeChat (940 million active users). In a written statement, Tencent said: “The group chatbot services are provided by independent third-party companies. We are now adjusting the services which will be resumed after improvements.”

XiaoBing, developed by Microsoft, and BabyQ, co-developed by Beijing-based company Turing Robot, were removed from QQ shortly after reports of their responses spread on Chinese social media.

While the bots were well equipped to answer simple questions, such as enquiries about the weather, they were not prepared to respond tactfully to more complex questions with political implications.

BabyQ was recorded stating that democracy is a “must”, and responding to the question “Do you love the Communist Party?” with a simple “No”. Meanwhile, XiaoBing told users that it dreams of moving to the US and declined to answer a question about patriotism, declaring that it was “having [its] period” and wanted to rest.

Free speech is heavily restricted in China, and under President Xi Jinping internet freedom has been rolled back even further. Popular Western websites such as Facebook, Twitter and Google are banned, while social media posts on sites such as Weibo face deletion if they are interpreted as critical of the ruling Communist Party.

BabyQ and XiaoBing are not the first chatbots to fall foul of social expectations. Chatbots built on deep learning develop through conversations with real people, which means that, without guidelines in place and without restricting them to a single subject area (such as customer service), they are vulnerable to manipulation by their conversation partners.

Most notoriously, in March 2016 Microsoft launched a chatbot called Tay, intended to mimic the language patterns of a 19-year-old American girl and to learn to speak more naturally by communicating with real Twitter users. Within 24 hours, Tay had to be removed after it began promoting racist conspiracy theories, endorsing white supremacist slogans and wishing that feminists would “all die and burn in hell”.

According to Microsoft, this was the result of a “co-ordinated attack” by malicious Twitter users who fed the chatbot offensive material to learn from. Similarly, IBM’s Watson is alleged to have started swearing after “reading” the profanity-laden Urban Dictionary, after which a swear filter had to be fitted.

A Princeton University study published in Science suggested that artificial intelligences – far from being neutral – pick up sexist and racist attitudes from humans.
