Machine learning decisions can be made both fair and accurate
A Nature Machine Intelligence paper by Carnegie Mellon University researchers challenges the long-held assumption that applying machine learning to public policy decisions must always involve a troubling trade-off between accuracy and fairness.
There is considerable alarm among academics and civil rights campaigners at the adoption of machine-learning tools in areas such as law enforcement, healthcare delivery, and recruitment, given that AI can replicate and amplify existing inequalities. For instance, an ACLU study using Amazon’s facial-recognition software to compare every member of the US Senate and House against a database of criminal mugshots disproportionately misidentified Black and Latino legislators as criminals.
Practitioners adjust training data, labels, model training, scoring systems, and other aspects of machine-learning pipelines in an effort to iron out these biases, on the widely held assumption that such adjustments render a system less accurate. A team of Carnegie Mellon researchers tested that assumption, found the supposed trade-off to be negligible in practice, and hopes to dispel it.
“You can actually get both [accuracy and fairness]. You don’t have to sacrifice accuracy to build systems that are fair and equitable,” said Professor Rayid Ghani. “But it does require you to deliberately design systems to be fair and equitable. Off-the-shelf systems won’t work.”
Ghani and his colleagues looked at scenarios in which limited in-demand resources are allocated using machine-learning systems: prioritising mental healthcare outreach based on risk of returning to jail, to reduce reincarceration; predicting safety violations to deploy a city’s housing inspectors most effectively; modelling risk of students failing to graduate high school in time to identify those most in need of additional support; and helping teachers reach crowdfunding goals for classroom needs.
In each scenario, the researchers found that models optimised for accuracy were effective at predicting the outcomes of interest but showed significant disparities in their recommendations for intervention. However, when the researchers applied fairness-targeted adjustments to the models’ outputs, they found that disparities based on race, age, or income could be removed without compromising accuracy.
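The paper does not spell out the exact adjustment used, but one common post-processing approach of this kind is to allocate a fixed intervention budget with group-aware selection, so that each group’s recall (the share of its truly at-risk individuals who are reached) is roughly equal, while still picking the highest-scoring cases within each group. A minimal sketch, assuming a scored population, binary need labels, and a budget of k slots (the function name and allocation rule are illustrative, not the authors’ method):

```python
import numpy as np

def recall_parity_selection(scores, groups, labels, k):
    """Allocate k intervention slots across groups so that each group's
    recall is roughly equalised, selecting the top-scored cases per group.

    scores : model risk scores, higher = more at risk
    groups : group membership label per individual (e.g. demographic group)
    labels : 1 if the individual truly needs the intervention, else 0
    k      : total number of intervention slots available
    """
    scores = np.asarray(scores)
    groups = np.asarray(groups)
    labels = np.asarray(labels)
    selected = np.zeros(len(scores), dtype=bool)

    total_positives = max(labels.sum(), 1)
    for g in np.unique(groups):
        mask = groups == g
        # give each group slots in proportion to its share of true need,
        # which equalises expected recall across groups
        share = labels[mask].sum() / total_positives
        n_g = int(round(k * share))
        idx = np.where(mask)[0]
        # within the group, still take the highest-risk individuals,
        # preserving the model's accuracy where it matters
        top = idx[np.argsort(scores[idx])[::-1][:n_g]]
        selected[top] = True
    return selected
```

Because the budget is redistributed rather than expanded, the overall number of people reached stays the same; only which groups they come from changes, which is why this kind of adjustment need not cost accuracy.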
The researchers hope their findings will start to change the minds of fellow researchers and policymakers as they consider how to apply machine learning to decision-making with real-world consequences.
“We want the artificial intelligence, computer science, and machine learning communities to stop accepting this assumption of a trade-off between accuracy and fairness and to start intentionally designing systems that maximise both,” said Kit Rodolfa, a Carnegie Mellon machine learning scientist. “We hope policymakers will embrace machine learning as a tool in their decision-making to help them achieve equitable outcomes.”