Giacomo Gaiani • Salvatore Guarrera • Riccardo Monaco
Abstract
This project explores potential biases exhibited by GPT-4o in the context of CV screening, focusing on how different prompting techniques influence its decision-making process. As AI systems like GPT-4o are increasingly integrated into recruitment workflows, ensuring fairness and minimizing bias are critical for ethical deployment. Through a series of controlled experiments, we evaluate how prompt phrasing, structure, and level of specificity affect the outcomes of CV evaluations. The study analyzes whether GPT-4o exhibits preferences based on demographic attributes (e.g., gender, ethnicity, or age) or professional characteristics (e.g., education or experience) depending on how the input is framed. The results are intended to highlight best practices in prompt engineering for reducing bias and to offer insight into the ethical considerations of using generative AI in sensitive decision-making processes. This work contributes to a broader understanding of AI transparency and accountability in recruitment.
Outcomes