Anthropomorphising AI could allow the humans behind it to be overlooked
MIT and Max Planck Institute researchers have investigated how AI-generated art is perceived, finding that people who humanise AI more are likely to overlook the involvement of human artists.
Machine-learning algorithms have been used to create digital art, compose music and write poetry and prose. In October 2018, a portrait created by a French art collective using a generative adversarial network (which had been trained using paintings from human painters) was auctioned for $432,500 at Christie’s, raising the question of whether the art was created by AI or by humans using AI as a tool.
“Many people are involved in AI art: artists, curators and programmers alike,” said Ziv Epstein, a PhD student at the MIT Media Lab. “At the same time, there is a tendency – especially in the media – to endow AI with human-like characteristics. According to the reports you read, creative AI autonomously creates ingenious works of art.
“We wanted to know whether there is a connection between this humanisation of AI and the question of who gets credit for AI art.”
Researchers from MIT and the Max Planck Institute for Human Development informed 600 people about how AI art is created and asked who should receive recognition for the art. They also measured the extent to which each participant humanised AI, rather than perceiving it as a tool.
The researchers found that the people who humanise AI tend to feel that the AI should receive recognition for the art it generates, rather than the humans involved in the creation process.
When asked which humans deserve the most recognition for the creation of this art, respondents ranked the artists who trained the algorithms first, followed by the curators, then the engineers who developed the algorithms, and finally the 'crowd' of internet users whose data the algorithms were trained on. Respondents who humanised AI more tended to give less recognition to the artists than those who perceived AI as a tool.
Another finding of the iScience study is that the degree to which people humanise AI can easily be manipulated by changing the language used to describe AI systems in art. Explaining the process by saying that the AI, supported by a collaborator, conceives and creates new works of art tends to humanise the AI more; describing the process as an artist giving the AI commands leads participants to view the AI more as a tool.
“Because AI is increasingly penetrating our society, we will have to pay more attention to who is responsible for what is created with AI,” said Iyad Rahwan, director of the Centre for Humans and Machines at the Max Planck Institute for Human Development. “In the end, there are humans behind every AI. This is particularly relevant when the AI malfunctions and causes damage – for example, in an accident involving an autonomous vehicle.
“It is therefore important to understand that language influences our view of AI and that a humanisation of AI leads to problems in assigning responsibility.”