Roberta Calegari, Andrea Aler Tubella, Virginia Dignum,
Michela Milano (eds.)
Journal of Artificial Intelligence
2024
We invited scholars to submit their original research on fairness and bias in AI for consideration in this special track. The track's primary focus is to highlight the importance of responsible and human-centered approaches to addressing these issues. This special track includes a curated selection of papers in extended form from the 1st AEQUITAS Workshop on Fairness and Bias in AI, held in Kraków in October 2023, in conjunction with ECAI 2023.
AI-based decision support systems are increasingly deployed in industry, in the public and private sectors, and in policymaking to guide decisions in important societal spheres, including hiring, university admissions, loan granting, medical diagnosis, and crime prediction. As our society faces a dramatic increase in inequalities and intersectional discrimination, we need to prevent AI systems from amplifying this phenomenon and instead employ AI to mitigate it. As we use automated decision support systems to formalize, scale, and accelerate processes, we have the opportunity, as well as the duty, to revisit these processes for the better, avoiding the perpetuation of existing patterns of injustice by detecting, diagnosing, and repairing them. For these systems to be trusted, domain experts and stakeholders must be able to trust the decisions they inform. Despite the increased amount of work in this area in recent years, we still lack a comprehensive understanding of how pertinent concepts of bias or discrimination should be interpreted in the context of AI, and of which socio-technical options to combat bias and discrimination are both realistically possible and normatively justified.
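To make the notion of detecting bias slightly more concrete, the following minimal sketch computes one common group-fairness measure, the demographic-parity difference, over a set of binary decisions. All names and data here are hypothetical and serve only to illustrate one of the many possible interpretations of bias discussed above:

```python
# Hypothetical sketch: demographic-parity difference for binary decisions.
# decisions[i] is 1 if the system granted the positive outcome (e.g., a loan);
# groups[i] identifies the protected group of individual i.

def demographic_parity_difference(decisions, groups):
    """Return the max difference in positive-outcome rates across groups."""
    counts = {}  # group -> (total, positives)
    for d, g in zip(decisions, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + d)
    rates = [pos / n for n, pos in counts.values()]
    return max(rates) - min(rates)

decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group "a" receives the positive outcome at rate 0.75, group "b" at 0.25.
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A value of 0 would mean all groups receive the positive outcome at the same rate; demographic parity is only one of several competing fairness criteria, which is precisely the kind of interpretive choice the track examines.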
Keywords
AI Fairness, AI Bias