Ethics of AI 2023/2024
29 completed projects
Achieving Fairness without Demographic Information
— Alessandro Pasi
• Matteo Belletti
• Razvan Ciprian Stricescu
Breaking and Fixing: A Study on Neural Network Vulnerabilities and Robustness in the medical field
— Alessandro Folloni
• Daniele Napolitano
• Marco Solime
Comparing Different Models’ Reliance on Prohibited Features in the Adult Census Income
— Daniele Baiocco
Developing a Fairness Auditing Tool for Income Prediction Models
— Mahmut Kaan Molla
• Farhad Bayrami
Developing and Analyzing Fairness in Algorithmic Policing
— Thomas Guizzetti
Explainability of Large Language Models (LLMs) for Anemia Diagnosis
— Elisa Castagnari
Explaining Convolutional Neural Networks: A Literature Overview and Implementation of Explainability Methods
— Maxence Murat
• Parsa Mastouri Kashani
• Sadeghi Khiabanian
Exploring Toxic and Hate Bias in Large Language Models
— Álvaro Esteban Muñoz
Fairness in Recidivism Detection Using Autoencoders and Gaussian Mixture Models
— Luca Trambaiollo
• Davide Capacchione
Fairness of predictive policing as a dynamic environment
— Edoardo Merli
Fortifying Medical Diagnosis: Federated Learning Augmented by Blockchain for Enhanced Privacy, Security, and Robustness
— Lorenzo Cassano
• Jacopo D’Abramo
GAUNTLET: An Explainable Model for Detecting AI-Generated Images
— Angelo Galavotti
• Lorenzo Galfano
Identifying and Mitigating Biases in Recruitment
— Mattia Maranzana
Machine Ethics: approaches and architectures
— Elisa Venturoli
Measuring fairness in automatic skin disease diagnosis with a convolutional neural network
— Chiara Bellatreccia
Prompting Techniques for Gender Equity in Open-Source LLMs
— Andrea Zecca
• Samuele Marro
• Stefano Colamonaco
Study, Design, and Implement the Knowledge-aware Object Detection Symbolic Knowledge Injection Algorithm
— Pelinsu Acar
• Rubin Carkaxhia
• Calin Diaconu
Study, design, and implement the [Lyrics] symbolic knowledge injection algorithm
— Alessio Pellegrino
To encode or not to encode? Explainable approaches for Emotion and Trigger detection on MELD
— Umberto Carlucci
• Giuseppe Carrino
• Matteo Vannucchi
User Persona for LLMs’ bias evaluation
— Luca De Dominicis
• Marco Lorenzo Damiani Ferretti
• Marco Panarelli
14 ongoing projects
Bias Mitigation in AI Models for Cardiovascular Diseases Prediction
— Giorgia Castelli
• Alice Fratini
• Madalina Ionela Mone
Bias mitigation in automated loan eligibility process
— Chiara Angileri
• Niccolò Marzi
• Shola Oshodi
Evaluating AI Models with the ETHICS Dataset for Ethical Alignment
— Gianluca Di Mauro
• Leonardo Monti
Framework for Symbolic Rules Constraints in Neural Networks
— Silje Eriksen
LIME vs SHAP: Comparing ML explanation techniques
— Andrea Terenziani
Study, design, and implement the GRAM symbolic knowledge injection algorithm
— Davide Freddi
• Davide Gardenal
Unmasking Bias in AI-Driven CV Screening: Evaluating Prompt Engineering Techniques to Enhance Fairness in GPT-4o Decision-Making
— Giacomo Gaiani
• Salvatore Guarrera
• Riccardo Monaco
Unveiling Political Bias in Artificial Intelligence: A Systematic Literature Review
— Giacomo Caroli
• Luca Mongiello