Fairness and Bias in Machine Learning Workshop
2021-02-26
Chapter 1 Fairness and Bias in Machine Learning Workshop
Overview
This workshop provides a gentle introduction to fairness and bias in machine learning applications, with a focus on ProPublica’s analysis of the COMPAS algorithm. We revised ProPublica’s original R and Python code to improve its readability, remixed it with other references, and then published and deployed the revised notebook using bookdown and GitHub Pages.
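The publishing step relies on the bookdown package. Below is a minimal sketch of how a revised notebook can be rendered and deployed; the entry-point file name (index.Rmd) and the docs/ output folder are assumptions about a typical bookdown project, not details taken from this workshop's repository.

```r
# A minimal sketch of the publishing workflow described above, assuming the
# book's entry point is index.Rmd and the rendered site is committed to a
# docs/ folder served by GitHub Pages (a common setup; the actual project
# layout may differ).
# install.packages("bookdown")  # one-time setup
bookdown::render_book("index.Rmd", output_format = "bookdown::gitbook")
# With `output_dir: "docs"` set in _bookdown.yml, pushing the rendered docs/
# folder to GitHub is enough for GitHub Pages to serve the book.
```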
Outline
- Bias in the data (see the data-loading sketch after this outline)
- Risk of Recidivism Data
- Risk of Violent Recidivism Data
- Bias in the algorithm
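The two datasets listed under "Bias in the data" come from ProPublica's compas-analysis GitHub repository. The sketch below shows one way to load them directly into R; the raw-file URLs and column names are assumptions based on that repository's published layout and may need adjusting.

```r
# A minimal sketch of loading ProPublica's COMPAS data; the file names below
# are assumed to match those in the propublica/compas-analysis repository.
base_url <- "https://raw.githubusercontent.com/propublica/compas-analysis/master"

# Risk of recidivism scores (two-year follow-up)
recid <- read.csv(file.path(base_url, "compas-scores-two-years.csv"))

# Risk of violent recidivism scores
violent_recid <- read.csv(file.path(base_url, "compas-scores-two-years-violent.csv"))

# Columns such as race, decile_score, and two_year_recid are central to the
# analysis (assumed column names; inspect with str() to confirm)
head(recid[, c("race", "decile_score", "two_year_recid")])
```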
References
For more information on ProPublica’s Machine Bias project, we encourage you to check out the following references.
Argument by Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner
Counterargument by Sam Corbett-Davies, Emma Pierson, Avi Feller and Sharad Goel