Elisabet Bolarín-Miró
Abstract
This article examines the ethical and legal implications of profiling bias that disproportionately affects neurodivergent individuals interacting with advanced generative AI systems. It argues that such bias arises not merely from data representation or processing, but from profiling mechanisms structurally embedded within model architectures. A transversal analysis demonstrates that the same systemic biases are present across the three major model families identified to date: Large Language Models (LLMs), Large Multimodal Models (LMMs), and Large Retrieval Models (LRMs).
The study focuses specifically on interactions between neurodivergent users—including individuals with ADHD, dyslexia, or autism spectrum conditions—and tools such as ChatGPT. While these technologies are often promoted as accessibility-enhancing, they remain trained predominantly on neurotypical communication patterns. This structural configuration creates barriers that lead to cognitive exclusion, semantic misinterpretation, and sensory overstimulation (Çarık et al., 2021), generating subjective experiences of anxiety and tension, particularly for users who process information in non-linear ways.
The analysis draws on recent research addressing uncertainty in multimodal interaction and the limitations of generative AI in non-normative contexts (Tang et al., 2023), and situates these findings within the European regulatory framework. Relevant provisions of the AI Act (Regulation 2024/1689, Arts. 5 and 27), the GDPR (Regulation 2016/679, Arts. 9 and 22), and the Charter of Fundamental Rights of the EU (Arts. 7, 8, 21) are examined as legal safeguards against structural exclusion.
The article concludes by proposing a normative and design-oriented framework for inclusive AI, calling for an explicit reform of profiling logic as a prerequisite for trustworthy and human-centred systems that safeguard neurodiverse populations.
Keywords
Neurodiversity; Profiling Bias; Cognitive Accessibility; Large Language Models (LLMs); Large Multimodal Models (LMMs); Large Retrieval Models (LRMs); Algorithmic Discrimination; AI Act; GDPR; ECHR; European Accessibility Act; Human-Centred AI.
Outcomes