Survivorship Bias Mitigation in a Recidivism Prediction Tool
Survivorship bias is the fallacy of focusing on entities that survived a selection process while overlooking those that did not. This common form of bias can lead to wrong conclusions. AI Fairness 360 is an open-source toolkit that can detect and handle bias using several mitigation techniques. However, what if the apparent bias in a dataset is not bias at all, but a justified imbalance? Applying bias mitigation when the "bias" is justified is undesirable, since it can seriously degrade the performance of a machine-learning-based prediction tool. To make well-informed product design decisions, it would be valuable to simulate bias mitigation in several situations and explore its impact. This paper describes the first results in creating such a simulation tool for a recidivism prediction tool. The main contribution is an indication of the challenges involved in building such a tool, in particular obtaining a realistic dataset.
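To make the notion of dataset bias concrete, a widely used fairness measure is disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The following minimal sketch computes it on hypothetical toy data (the groups, outcome coding, and threshold are illustrative assumptions, not taken from the paper or from AI Fairness 360's implementation):

```python
# Disparate impact: ratio of favorable-outcome rates between groups.
# A value far below 1.0 signals potential bias; a common rule of
# thumb flags ratios under 0.8. Data below is purely hypothetical.

def favorable_rate(outcomes):
    """Fraction of favorable predictions (0 = 'will not reoffend')."""
    return sum(1 for y in outcomes if y == 0) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Favorable rate of the unprivileged group over the privileged one."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical predictions per group (1 = predicted to reoffend).
group_a = [0, 0, 1, 1, 1, 1, 0, 1]  # unprivileged: 3/8 favorable
group_b = [0, 0, 0, 1, 0, 1, 0, 0]  # privileged:   6/8 favorable

di = disparate_impact(group_a, group_b)
print(f"disparate impact: {di:.2f}")  # → disparate impact: 0.50
```

Whether a ratio like 0.50 reflects unjustified bias to be mitigated, or a justified imbalance that mitigation would wrongly erase, is exactly the question the simulation tool described here is meant to help answer.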
Table of contents
- 1. Introduction
- 2. Bias and Recidivism Prediction
- 3. An Experiment
- 4. Results
- 5. Conclusions and Discussion
- 6. References