Rino Falcone, Cristiano Castelfranchi, Alessandro Sapienza, Filippo Cantucci (eds.)
WOA 2023 – 24th Workshop “From Objects to Agents”, pp. 49–65
CEUR Workshop Proceedings (AIxIA Series) 3579
Sun SITE Central Europe, RWTH Aachen University
November 2023
While representing the de facto framework for enabling distributed training of Machine Learning models, Federated Learning (FL) still suffers from convergence issues when non-Independent and Identically Distributed (non-IID) data are considered. In this context, local model optimisation on different data distributions generates dissimilar updates, which are difficult to aggregate and translate into sub-optimal convergence. To tackle these issues, we propose Peer-Reviewed Federated Learning (PRFL), an extension of the traditional FL training process inspired by the peer-review procedure common in the academic field, in which model updates are reviewed by several other clients in the federation before being aggregated at the server side. PRFL aims to enable the identification of relevant updates while disregarding ineffective ones. We implement PRFL on top of the Flower FL library and release Peer-Reviewed Flower as a publicly available library for the modular implementation of any review-based FL algorithm. A preliminary case study on both regression and classification tasks highlights the potential of PRFL, showcasing how the distributed solution can achieve performance similar to that obtained by the corresponding centralised algorithm, even when non-IID data are considered.
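The abstract describes the review step only at a high level: client updates are scored by other clients in the federation before the server aggregates them. As a rough illustration only, and not the paper's PRFL algorithm or the Peer-Reviewed Flower API, the sketch below shows in plain Python/NumPy how a review-weighted aggregation round could be organised; all function names, the dummy scoring rule, and the weighting scheme are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, client_id):
    # Placeholder for local training on client_id's data: here the update is
    # simulated as a small random perturbation of the global weights.
    return global_weights + 0.01 * rng.standard_normal(global_weights.shape)

def peer_review(update, reviewer_id):
    # Placeholder review step: a reviewer would normally evaluate the update
    # on its own local data; here we just return a dummy score in [0, 1].
    return float(np.clip(1.0 - 0.1 * np.linalg.norm(update), 0.0, 1.0))

def federated_round(global_weights, client_ids, reviewers_per_update=3):
    updates = [local_update(global_weights, c) for c in client_ids]
    scores = []
    for i, u in enumerate(updates):
        # Each update is scored by a few other clients (never its author).
        reviewers = [c for c in client_ids if c != client_ids[i]][:reviewers_per_update]
        scores.append(np.mean([peer_review(u, r) for r in reviewers]))
    # Review-weighted averaging: low-scoring updates contribute less.
    weights = np.asarray(scores) / (np.sum(scores) + 1e-12)
    return np.average(np.stack(updates), axis=0, weights=weights)

global_weights = np.zeros(10)
for _ in range(5):
    global_weights = federated_round(global_weights, client_ids=list(range(8)))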
Keywords
Federated Learning • non-IID data • Peer-review
Originating event
Journal or series
Funding projects
FAIR-PE01-SP08 — Future AI Research – Partenariato Esteso sull'Intelligenza Artificiale – Spoke 8 “Pervasive AI”
(01/01/2023–31/12/2025)
EXPECTATION — Personalized Explainable Artificial Intelligence for decentralized agents with heterogeneous knowledge
(01/04/2021–31/03/2024)
Serves as
Reference publication for a presentation