Developing and Analyzing Fairness in Algorithmic Policing

Thomas Guizzetti
abstract

Law enforcement agencies in the United States use predictive policing algorithms that analyze past crime data to identify high-risk areas where officers are directed to patrol during each shift. This project develops a predictive algorithm on a US criminological dataset and identifies and mitigates biases that may disproportionately affect marginalized communities. The focus is on evaluating the model's performance and fairness both before and after applying bias mitigation techniques. By addressing these biases, the project aims to show how a more equitable crime prediction model, one that produces unbiased insights and predictions, can be developed, ultimately contributing to fairer decision-making in algorithmic policing.
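The before/after fairness comparison described above could be sketched as follows. This is a minimal illustration, not the project's actual pipeline: it uses synthetic data in place of the (unspecified) criminological dataset, and assumes a binary sensitive attribute, a logistic-regression baseline, and Fairlearn's ExponentiatedGradient reduction with a DemographicParity constraint as the mitigation technique.

```python
# Hedged sketch: synthetic stand-in data and an assumed mitigation choice
# (Fairlearn's ExponentiatedGradient + DemographicParity), not the project's
# real dataset or method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from fairlearn.metrics import demographic_parity_difference
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
n = 5000
# Hypothetical tabular features standing in for crime-report attributes.
X = rng.normal(size=(n, 5))
# Hypothetical binary sensitive attribute (e.g., a neighborhood demographic flag).
group = rng.integers(0, 2, size=n)
# Synthetic labels with a deliberate group-correlated skew so bias is visible.
y = ((X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

# Baseline model: no fairness intervention.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred_base = base.predict(X_te)

# Mitigated model: a reductions approach that trades some accuracy for parity.
mitigator = ExponentiatedGradient(
    LogisticRegression(max_iter=1000), constraints=DemographicParity()
)
mitigator.fit(X_tr, y_tr, sensitive_features=g_tr)
pred_fair = mitigator.predict(X_te)

# Demographic parity difference: 0 means equal positive-prediction rates
# across groups; larger absolute values indicate more disparity.
for name, pred in [("baseline", pred_base), ("mitigated", pred_fair)]:
    dpd = demographic_parity_difference(y_te, pred, sensitive_features=g_te)
    acc = (pred == y_te).mean()
    print(f"{name}: accuracy={acc:.3f}, demographic parity diff={dpd:.3f}")
```

Reporting both accuracy and the parity gap side by side, as this sketch does, makes the performance/fairness trade-off of the mitigation step explicit rather than hiding it behind a single headline metric.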

outcomes