Detecting and Mitigating Bias in AI-Powered Loan Approval Systems

Eleasar Erz
abstract

The increasing use of AI algorithms in financial services has raised urgent concerns about
algorithmic fairness, especially in credit risk assessment and loan approval. These
systems can unintentionally perpetuate historical biases, disadvantaging protected groups,
including women, older applicants, and ethnic minorities. In this project, we will
investigate and mitigate such biases in AI-powered loan approval models using the
German Credit Dataset from the UCI Machine Learning Repository. This dataset is a
widely used benchmark for fairness and discrimination analysis.
We will use this dataset to train standard classification models, such as Logistic
Regression, Random Forest, and Gradient Boosting, and apply fairness auditing tools
such as the IBM AI Fairness 360 (AIF360) toolkit. This will enable us to quantify
disparate impact across sensitive attributes (e.g., age and sex) and evaluate the
effectiveness of bias mitigation strategies.
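One of the core metrics such an audit computes is the disparate impact ratio: the rate of favorable outcomes for the unprivileged group divided by that for the privileged group, with values below 0.8 conventionally flagged under the "four-fifths rule." As a minimal sketch (the approval decisions and group labels below are synthetic placeholders for illustration, not the German Credit Dataset or the AIF360 API):

```python
def disparate_impact(decisions, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates:
    P(approved | unprivileged) / P(approved | privileged).
    Values below 0.8 are conventionally flagged (four-fifths rule)."""
    def approval_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return approval_rate(unprivileged) / approval_rate(privileged)

# Hypothetical approval decisions (1 = approved), for illustration only.
decisions = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0]
groups    = ['f', 'f', 'm', 'm', 'f', 'm', 'm', 'm', 'f', 'f']

ratio = disparate_impact(decisions, groups, unprivileged='f', privileged='m')
print(ratio)  # 0.2 here: the 'f' group is approved at one fifth the 'm' rate
```

In a full audit the same ratio would be computed on model predictions rather than raw labels, before and after applying a mitigation technique, to measure how much the intervention closes the gap.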

outcomes