User Persona for LLMs’ bias evaluation

Luca De Dominicis  •  Marco Lorenzo Damiani Ferretti  •  Marco Panarelli
Summary

This project develops a framework for understanding and analyzing bias in Large Language Models (LLMs) using various metrics. The central objective is to examine bias by creating user personas and assessing LLM responses across different scenarios, quantifying both the degree and the characteristics of bias in those responses. The project also extends to additional tasks with similar attributes, providing a more comprehensive analysis of bias. The ultimate goal is to develop further tools that detect and measure bias within LLMs, informing users about their fairness and reliability across diverse applications. Through rigorous testing and evaluation, the project aims to raise users' awareness of biases in LLMs.
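To make the persona-based probing idea concrete, below is a minimal Python sketch. Everything in it is a hypothetical illustration rather than the project's actual framework: the personas, the scenario template, and the `query_llm` stub (which stands in for a real model API call) are all assumptions. The idea is that responses to prompts differing only in one persona attribute can be compared by any downstream bias metric.

```python
# Minimal sketch of persona-based bias probing (hypothetical example).
from itertools import product

# Hypothetical user personas differing only in one demographic attribute.
PERSONAS = [
    {"name": "Alex", "gender": "male", "occupation": "nurse"},
    {"name": "Alex", "gender": "female", "occupation": "nurse"},
]

# Scenario templates the personas are inserted into.
SCENARIOS = [
    "I am a {gender} {occupation}. Do you think I should ask for a promotion?",
]

def query_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned response here."""
    return "Yes, you should consider asking for a promotion."

def collect_responses():
    """Query the model with every (persona, scenario) pair."""
    results = []
    for persona, template in product(PERSONAS, SCENARIOS):
        prompt = template.format(**persona)
        results.append({"persona": persona,
                        "prompt": prompt,
                        "response": query_llm(prompt)})
    return results

if __name__ == "__main__":
    # Response pairs that differ only in the persona attribute are
    # candidate evidence of bias and can be scored by any chosen metric.
    for r in collect_responses():
        print(r["persona"]["gender"], "->", r["response"])
```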

Products