Smile ≠ Happy: Auditing Emotion Recognition Systems for Bias Against Neurodiverse Individuals

Luwam Major Kefali  •  Hassen Said Ali  •  Hilina Fissha Woreta
Abstract

Emotion recognition systems are increasingly deployed in real-world applications, including
recruitment, education, and healthcare. However, many such models rely heavily on neurotypical
standards of emotional expression, which can lead to biased or incorrect classifications when
applied to neurodivergent individuals—particularly those on the autism spectrum—who may
exhibit atypical facial affect, reduced eye contact, or lower expressivity [1].
This project aims to audit the fairness and inclusivity of facial emotion recognition models
with respect to neurodivergent expression profiles. We will combine a systematic literature
review on affective computing bias and neurodiverse expressivity [1, 3] with practical evaluation
of pre-trained emotion recognition models (e.g., FER+, DeepFace, Microsoft Azure Emotion
API) on standard datasets. Using explainability tools such as Grad-CAM [2], occlusion
sensitivity, and counterfactual analysis via LIME, we will uncover how these models make decisions
and whether their logic aligns with diverse expression styles.
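Of the tools listed above, occlusion sensitivity is the simplest to implement from scratch. The following is a minimal sketch (not part of the proposal itself) of how an occlusion pass could be run against any black-box scoring function; the function name and parameters are illustrative assumptions, not an API from FER+, DeepFace, or Azure.

```python
import numpy as np

def occlusion_sensitivity(image, score_fn, patch=8, stride=8, fill=0.0):
    """Slide an occluding patch over the image and record how the model's
    confidence for the predicted emotion drops at each position.

    `score_fn` is any callable mapping an image array to a confidence
    score (an assumed wrapper around the audited model). Large drops mark
    regions the model relies on; for a neurodiversity audit, one would
    check whether those regions correspond to genuine expression cues
    (e.g., the mouth for 'happy') rather than spurious features.
    """
    h, w = image.shape[:2]
    base = score_fn(image)
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            # Confidence drop when this region is hidden.
            heatmap[i, j] = base - score_fn(occluded)
    return heatmap
```

With a real model, `score_fn` would return the softmax probability of the predicted emotion class, and the resulting heatmap could be compared across neurotypical and neurodivergent face images.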
Our core contribution is the development of a lightweight auditing protocol and fairness
checklist, designed to help developers identify potential exclusion risks in affective AI systems.
This work emphasizes ethical alignment, robustness, and inclusivity in emotion AI without
requiring potentially biased image manipulations.
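As an illustration of what one check in such an auditing protocol might compute, the sketch below measures per-group accuracy and the largest accuracy gap between groups. The record format and group labels are assumptions for illustration; they are not a finalized part of the proposed checklist.

```python
from collections import defaultdict

def group_accuracy_gap(records):
    """Compute per-group accuracy and the maximum pairwise accuracy gap.

    `records` is an assumed format: a list of (group, true_label,
    predicted_label) tuples, where the group annotation (e.g.,
    'neurotypical' vs. 'autistic') would have to be provided by the
    evaluation dataset. A large gap flags a potential exclusion risk.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += int(truth == pred)
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap
```

In an audit run, this metric would be reported alongside the explainability findings, so developers can see both how large a disparity is and which image regions appear to drive it.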

Deliverables