Alessandro Folloni • Daniele Napolitano • Marco Solime
Abstract
Computer vision models aim to extract useful information from images or videos, but they can be deceived by adversarial attacks at various stages of the pipeline. This is especially concerning in critical domains such as medical image classification. In this paper, we explore the vulnerabilities of a medical image classification model to adversarial attacks, including data and model poisoning, inversion and extraction attacks, and the fast gradient sign method (FGSM). By conducting these attacks, we expose potential weaknesses in the model. We then apply various techniques to enhance the model's robustness and reliability, comparing its performance before and after these interventions.
Outcomes