GAUNTLET: An Explainable Model for Detecting AI-Generated Images

Angelo Galavotti  •  Lorenzo Galfano
Abstract

As AI-based image generation continues to advance, distinguishing between human-crafted and AI-generated content is becoming increasingly challenging. This poses significant risks, as such content can be exploited in malicious contexts.

GAUNTLET tackles this issue by providing an explainable system for detecting AI-generated images. In this document, we illustrate the features of GAUNTLET, analyze its results, and compare the different explainability tools we used. In addition, we present the training techniques employed and explain how they allowed us to fully exploit the model's capabilities.

To demonstrate its practical applicability, we developed a web application that utilizes this model, enabling users to upload images and receive detailed insights into whether the content is AI-generated. This use case underscores the potential of GAUNTLET in addressing real-world challenges.

Outcomes