Andrea Zecca • Samuele Marro • Stefano Colamonaco
Abstract
Large Language Models (LLMs) are finding increasingly widespread applications, from generating
creative text formats to informing decision-making processes. However, these powerful tools inherit
the biases and stereotypes present in the data they are trained on. This can lead to the generation
of text containing harmful stereotypes, particularly with regard to gender representation. This paper
investigates the presence of gender biases and stereotypes in LLMs, with an emphasis on the use
of prompting techniques to mitigate this issue. This work contributes to the development of fairer
and more inclusive AI.
Products