Fairness in Recruitment Decision Algorithms

Riccardo Marvasi  •  Edoardo Saturno  •  Lorenzo Balzani
Abstract

With the increasing use of Artificial Intelligence (AI) across industries, ensuring fairness in these systems has become a critical concern. This work investigates the biases introduced by hiring algorithms employed by large corporations to prescreen job candidates. These algorithms are increasingly used to streamline the hiring process, making it more efficient and cost-effective. However, despite their advantages, these AI systems can perpetuate and even exacerbate existing biases, leading to unfair discrimination against certain groups. Our research aims to identify specific biases within these prescreening algorithms and to propose methods to mitigate them. In our study, we compare traditional machine-learning techniques with more advanced neural network architectures. We aim to demonstrate the potential presence of unfairness both in the dataset and in the decision-making process of the algorithms, and to present solutions that can be implemented to correct this behavior. Our findings highlight the importance of transparency, accountability, and continuous monitoring in the deployment of AI systems in hiring, ensuring that they promote equality and diversity in the workplace.
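As an illustration of the kind of unfairness metric referred to above, the following is a minimal sketch of the demographic parity difference: the absolute gap in positive-prediction (e.g. "invite to interview") rates between two candidate groups. The function name and toy data are hypothetical, not taken from the study itself.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between group 0 and group 1.

    y_pred: binary predictions (1 = positive outcome, e.g. candidate advanced)
    group:  binary protected-attribute membership for each candidate
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_0 - rate_1)

# Toy example: 8 candidates, 4 per group (hypothetical data).
preds = [1, 1, 0, 1, 0, 0, 0, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.75 vs 0.25 -> 0.5
```

A value of 0 indicates equal selection rates across groups; larger values signal a disparity that a prescreening algorithm may be introducing or amplifying.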

Keywords
neural networks, unfairness metrics, explainability methods, unfairness in decision making
Outcomes