Unveiling Political Bias in Artificial Intelligence: A Systematic Literature Review

Giacomo Caroli  •  Luca Mongiello
Abstract

Large Language Models (LLMs) have revolutionized the landscape of natural language processing, finding applications across diverse domains. However, the issue of political biases encoded within these models, which can influence outputs and perpetuate ideological leanings, is often overlooked.
A recent study, "ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience" by Xu et al. [1], sheds light on how users perceive and interact with LLM-powered chatbots compared to traditional search engines. Their findings reveal that while tools like ChatGPT enhance efficiency and user satisfaction by delivering concise and accessible information, they may inadvertently encourage overreliance and fail to expose users to diverse perspectives, particularly in tasks requiring fact-checking or critical evaluation.
Building on these insights, this project conducts a systematic literature review (SLR) of political bias in LLMs, focusing on its identification, measurement, and mitigation.

Outcomes