Luca Trambaiollo • Davide Capacchione
Abstract
Predicting recidivism is critical in the criminal justice system for making informed decisions about sentencing and parole. The COMPAS dataset includes predictions about the likelihood of a person re-offending (recidivism), which can influence these decisions. However, standard machine learning models can inadvertently favor or discriminate against certain demographic groups, raising concerns about fairness. In this project, we apply Autoencoders and Gaussian Mixture Models (GMMs) to the COMPAS Recidivism dataset to identify and analyze potential biases. By implementing mitigation techniques such as adversarial debiasing and fairness constraints, we reduce disparities between sensitive groups. Our findings reveal the importance of integrating fairness techniques into machine learning pipelines to balance ethical considerations with predictive performance. The results highlight the need for transparency and accountability when deploying such models in high-stakes environments like criminal justice.
Outcomes