Mahmut Kaan Molla • Farhad Bayrami
Abstract
With the increasing reliance on machine learning models for decision-making in critical areas
such as finance and employment, it is imperative to ensure that these models operate fairly and
equitably. Biases in such models can lead to discriminatory practices that are both ethically and
legally unacceptable. This project aims to develop a fairness auditing tool that detects and
mitigates bias in income prediction models.
Outcomes