On the show this week we have Jeff Larson, Data Editor at ProPublica, to talk about his team’s recent work on “Machine Bias”. Jeff and his colleagues have analyzed the automated scoring decisions made by COMPAS, one of the systems American judges use to assess the likelihood that a convicted criminal will re-offend.
By looking at the COMPAS data, Jeff and his colleagues sought to determine the accuracy of the algorithm and whether it introduces significant biases into the criminal justice system — racial or otherwise. (Their finding: Yes, it seems that it does.)
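The kind of disparity at the heart of an analysis like this can be sketched as a comparison of false positive rates across groups — how often people who did not re-offend were nonetheless flagged as high risk. The sketch below uses invented records and a made-up grouping, purely for illustration; it is not the ProPublica methodology or the real COMPAS data.

```python
# Illustrative sketch: compare false positive rates across groups.
# Records are invented, not drawn from the COMPAS dataset.
from collections import defaultdict

# Each record: (group, predicted_high_risk, reoffended_within_two_years)
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, False), ("B", False, True),
]

def false_positive_rates(rows):
    """Per group: share of non-reoffenders who were flagged high risk."""
    flagged = defaultdict(int)    # non-reoffenders flagged high risk
    negatives = defaultdict(int)  # all non-reoffenders
    for group, predicted_high_risk, reoffended in rows:
        if not reoffended:
            negatives[group] += 1
            if predicted_high_risk:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rates(records))  # group A: 2/3 flagged, group B: 0/2
```

If the rates diverge sharply between groups, the tool is making its mistakes unevenly — which is the sense of “bias” the episode digs into.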
On the show we talk about how the software is used by judges, how the ProPublica analysis was carried out, what the team found, and what can be done to improve the situation.
Jeff also gives us a small preview of other stories his team is working on, and shares advice on how to go about developing similar projects yourself.
Enjoy the show!
This episode of Data Stories is sponsored by Qlik, which allows you to explore the hidden relationships within your data that lead to meaningful insights. Take a look at their Presidential Election app to analyze the TV network coverage for every mention of both Donald Trump and Hillary Clinton. And make sure to try out Qlik Sense for free at: qlik.de/datastories.
- Data analysis on GitHub: https://github.com/propublica/compas-analysis
- Article: “Machine Bias”
- Article: “Discrimination By Design”
- Article: “ProPublica Responds to Company’s Critique of Machine Bias Story”
- Article: “Technical Response to Northpointe”
- Article: “What Algorithmic Injustice Looks Like in Real Life”
- Article: “How We Analyzed the COMPAS Recidivism Algorithm”
- Article: “How Machines Learn to Be Racist”
- Workshop: FAT ML 2016: Fairness, Accountability, and Transparency in Machine Learning