A Hybrid AI System for Adoption Eligibility Assessment with Explainable Feedback

Filippo Teodorani  •  Diego Rossi
abstract

This project investigates the development of a hybrid AI system designed to support the evaluation of adoption eligibility. The system combines a supervised machine learning classifier trained on a simulated dataset of prospective adoptive applicants with explainable AI (XAI) methods to highlight the most relevant factors influencing individual predictions.
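As a minimal sketch of the classifier-plus-attribution pipeline described above (the feature names, weights, and the simple weight-times-value attribution are all invented for illustration; the actual system's variables and XAI method may differ), the idea can be expressed as:

```python
import math

# Hypothetical applicant features and logistic-regression weights;
# purely illustrative values, not the project's real model.
WEIGHTS = {"age": -0.04, "income_stability": 1.2, "home_suitability": 0.9}
BIAS = 0.5

def predict_with_attributions(applicant):
    """Return an eligibility probability and per-feature contributions.

    Each contribution is simply weight * value here, a crude stand-in
    for richer XAI attributions such as SHAP values.
    """
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return probability, contributions

prob, contribs = predict_with_attributions(
    {"age": 35, "income_stability": 0.8, "home_suitability": 0.7}
)
# The factor with the largest absolute contribution drives the explanation.
top_factor = max(contribs, key=lambda f: abs(contribs[f]))
```

The per-feature contributions, rather than the bare probability, are what the downstream explanation component consumes.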
The dataset is synthetically generated but grounded in real-world constraints: both the choice of variables and the statistical distributions are based on international adoption standards, institutional guidelines, and relevant academic literature.
To promote transparency and user trust, the system incorporates a large language model (LLM), which translates the XAI-generated insights into natural language explanations that are understandable, neutral, and ethically framed. These explanations aim to empower human decision-makers and applicants alike, enhancing human agency and oversight in algorithmic judgments.
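One way the XAI-to-LLM hand-off could work is to rank the attributions and embed them in a neutrally worded prompt. The template below is a sketch under assumptions: the wording, the number of factors surfaced, and the ethical-framing instructions are placeholders, not the project's actual prompt.

```python
def build_explanation_prompt(decision, contributions, top_k=3):
    """Format XAI attributions into a neutral prompt for an LLM.

    `contributions` maps feature names to signed attribution scores;
    the prompt text itself is illustrative only.
    """
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    factor_lines = "\n".join(
        f"- {name}: contribution {value:+.2f}" for name, value in ranked[:top_k]
    )
    return (
        f"The eligibility model returned the outcome '{decision}'.\n"
        f"The most influential factors were:\n{factor_lines}\n"
        "Explain this outcome to the applicant in plain, neutral language, "
        "without passing judgment, and note that a human reviews every decision."
    )

prompt = build_explanation_prompt(
    "eligible", {"age": -1.4, "income_stability": 0.96, "home_suitability": 0.63}
)
```

The resulting string would then be sent to whichever LLM the platform uses; keeping the prompt construction in code makes the framing auditable and consistent across applicants.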
Beyond the prototype, our goal is to deliver a functional platform where users can input their personal data corresponding to selected variables and receive a decision outcome accompanied by an interpretable explanation generated by the LLM. This interactive system would simulate real-world adoption scenarios while demonstrating the potential of combining XAI and LLMs for transparent, user-centered decision support.

deliverables