While representing the de facto standard framework for the distributed training of Machine Learning models, Federated Learning (FL) still suffers from convergence issues when non-Independent and Identically Distributed (non-IID) data are considered. In this context, local model optimisation on different data distributions generates dissimilar updates, which are difficult to aggregate and result in sub-optimal convergence. To tackle these issues, we propose Peer-Reviewed Federated Learning (PRFL), an extension of the traditional FL training process inspired by the peer-review procedure common in academia, where model updates are reviewed by several other clients in the federation before being aggregated at the server side. PRFL aims to identify the relevant updates while disregarding the ineffective ones. We implement PRFL on top of the Flower FL library, and release Peer-Reviewed Flower as a publicly available library for the modular implementation of any review-based FL algorithm. A preliminary case study on both regression and classification tasks highlights the potential of PRFL, showing that the distributed solution can achieve performance similar to that of the corresponding centralised algorithm, even when non-IID data are considered.
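To make the review-based round described above concrete, the following is a minimal, framework-agnostic sketch of one PRFL-style round on a toy regression task. All function names, the scoring rule (inverse MSE on each reviewer's local data), the acceptance threshold, and the score-weighted aggregation are illustrative assumptions of this sketch, not the actual PRFL algorithm or the Peer-Reviewed Flower API.

```python
# Hypothetical sketch of one peer-reviewed FL round (not the PRFL/Flower API).
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=10):
    """Local optimisation: a few gradient-descent steps on the client's data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def review(w, X, y):
    """A reviewer scores a candidate update by its fit on the reviewer's
    own local data (inverse MSE, so higher means better)."""
    mse = float(np.mean((X @ w - y) ** 2))
    return 1.0 / (1.0 + mse)

def prfl_round(w_global, clients, n_reviewers=3, min_score=0.05):
    # 1. Every client computes an update on its own (possibly non-IID) data.
    updates = [local_update(w_global, X, y) for X, y in clients]

    # 2. Each update is reviewed by several *other* clients in the federation.
    scores = []
    for i, u in enumerate(updates):
        reviewers = [c for j, c in enumerate(clients) if j != i][:n_reviewers]
        scores.append(np.mean([review(u, X, y) for X, y in reviewers]))

    # 3. The server keeps the well-reviewed updates, discards the rest, and
    #    aggregates them (score-weighted averaging is one possible choice).
    kept = [(u, s) for u, s in zip(updates, scores) if s >= min_score]
    if not kept:
        return w_global
    total = sum(s for _, s in kept)
    return sum(u * (s / total) for u, s in kept)

# Toy federation: 5 clients with shifted (non-IID) linear-regression data.
true_w = np.array([2.0, -1.0])
clients = []
for shift in np.linspace(-1, 1, 5):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = prfl_round(w, clients)
print("recovered weights:", w)  # should approach [2, -1]
```

The key design point the sketch illustrates is that acceptance is decided by peers rather than by the server alone: an update that fits only its producer's skewed distribution scores poorly on the reviewers' data and is down-weighted or dropped before aggregation.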
Funding projects:
- FAIR-PE01-SP08 — Future AI Research – Partenariato Esteso sull'Intelligenza Artificiale – Spoke 8 "Pervasive AI" (01/01/2023–31/12/2025)
- EXPECTATION — Personalized Explainable Artificial Intelligence for decentralized agents with heterogeneous knowledge (01/04/2021–31/03/2024)