
AI agent talks through its decisions in simple English
American academics have developed a machine-learning agent capable of explaining the ‘motivations’ behind its decisions in real time.
Machine-learning algorithms are increasingly relied upon for important decisions in areas such as medicine, law enforcement and finance. Many are concerned that these algorithms are effectively black boxes, reaching their conclusions through opaque and poorly understood decision-making processes. This impenetrability is particularly concerning given widespread evidence that machine-learning systems such as facial-recognition software replicate human biases against women and ethnic minorities.
To elucidate how machine-learning algorithms reach their conclusions, researchers from Georgia Institute of Technology, Cornell University, and the University of Kentucky have developed an agent that produces explanations for its decisions.
The agent generates natural-language explanations (or “rationales”) in real time, allowing non-technical users to understand what it is doing and to check that it is performing its tasks correctly. The researchers hope this will make the system more relatable and trustworthy.
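The article does not describe how the agent produces these rationales internally. Purely as an illustrative sketch, the snippet below shows the kind of input/output pairing involved: a simple Frogger-like game state, a chosen move, and a plain-English rationale for that move. The state fields, the trivial policy and the templated text are all assumptions made for illustration; the system described above generates its explanations rather than selecting canned text.

```python
# Hypothetical sketch only: the article does not detail the agent's internals.
# It shows the shape of the behaviour described above - a game state and a
# chosen action being paired with a plain-English rationale.
from dataclasses import dataclass


@dataclass
class GameState:
    frog_row: int       # how far the frog has advanced towards the goal
    car_ahead: bool     # is a car passing in the lane directly ahead?


def choose_action(state: GameState) -> str:
    """Pick a move with a trivial hand-written policy (illustrative only)."""
    return "wait" if state.car_ahead else "move_up"


def rationale(state: GameState, action: str) -> str:
    """Return a plain-English 'rationale' for the chosen move.

    A real system would generate this text with a learned model; simple
    templates are used here purely to show the input/output relationship.
    """
    if action == "wait":
        return "I'm waiting because a car is passing in the lane ahead."
    return "The lane ahead is clear, so I'm moving up towards the goal."


state = GameState(frog_row=3, car_ahead=True)
action = choose_action(state)
print(action, "-", rationale(state, action))
# wait - I'm waiting because a car is passing in the lane ahead.
```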
“If the power of AI is to be democratised, it needs to be accessible to anyone regardless of their technical abilities,” said Upol Ehsan, lead researcher, and PhD student at Georgia Tech’s School of Interactive Computing. “As AI pervades all aspects of our lives, there is a distinct need for human-centred AI design that makes black-boxed AI systems explainable to everyday users. Our work takes a formative step toward understanding the role of language-based explanations and how humans perceive them.”
To test the agent, the researchers recruited participants to watch it play a game of Frogger (a classic arcade game) and rank three explanations for each move: one written by a person, one generated by the agent, and one randomly generated. The participants ranked the responses on confidence (in the agent’s ability), human-likeness, adequate justification, and understandability.
While the participants preferred the human-written explanations, they ranked the AI-generated responses a close second. The most popular AI responses were those that recognised context, acknowledged and adjusted for upcoming dangers, and remained adaptable.
“This project is more about understanding human perceptions and preferences of these AI systems than it is about building new technologies,” Ehsan said. “At the heart of explainability is sense making. We are trying to understand that human factor.”
In a second, related study, the researchers tested preferences for concise and focused explanations versus ‘complete picture’ rationales, and found that participants strongly preferred explanations which acknowledged context.
In the future, the researchers hope to apply their findings to various types of AI agent, such as ‘intelligent’ social companion robots.