Erfan Samieyan Sahneh • Jana Nikolovska
Abstract
This project investigates the bias resilience of Multimodal Large Language Models (MLLMs)
when interpreting visually ambiguous images, using a car crash dataset as the evaluation basis.
By pairing carefully selected ambiguous images with a controlled set of neutral and biased
prompts, we aim to assess both inherent biases acquired during pretraining and susceptibility to
suggestion-based bias. The findings will provide insight into the reliability, robustness, and
ethical vulnerabilities of the selected MLLMs in this context.
Outcomes