Preventing harm to the rare in combating the malicious: A filtering-and-voting framework with adaptive aggregation in federated learning

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Neurocomputing, 2024, 604
Issue Date:
2024-11-01
The distributed nature of Federated Learning (FL) introduces security vulnerabilities and challenges arising from heterogeneous data distributions. Traditional FL aggregation algorithms often mitigate security risks by excluding outliers, which compromises the diversity of shared information. In this paper, we introduce a novel filtering-and-voting framework that navigates the challenges posed by non-IID training data and malicious attacks on FL. The proposed framework integrates a filtering layer that defends against the intrusion of malicious models and a voting layer that harnesses valuable contributions from diverse participants. Moreover, by employing Deep Reinforcement Learning (DRL) to dynamically adjust aggregation weights, we optimize the aggregation of participant data, enhancing the diversity of information used for aggregation and improving the performance of the global model. Experimental results demonstrate that the proposed framework achieves superior accuracy over traditional and contemporary FL aggregation methods when the participating models are diverse, and that it is robust against malicious poisoning attacks.
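The two-stage aggregation idea described above can be illustrated with a minimal sketch. The function below is a hypothetical simplification, not the paper's method: the filtering layer is stood in for by a median-distance outlier check, and the voting layer by a weighted average whose per-client weights (`trust_scores`) are a placeholder for the DRL-adjusted weights the paper learns.

```python
import numpy as np

def filter_and_vote(client_updates, trust_scores, dist_threshold=3.0):
    """Aggregate flattened client model updates in two stages (a sketch):
    1) filtering layer: drop updates whose distance to the coordinate-wise
       median exceeds `dist_threshold` times the median client distance;
    2) voting layer: weighted average of the surviving updates, weighted
       by `trust_scores` (a stand-in for DRL-adjusted aggregation weights).
    Returns the aggregated update and the boolean keep-mask.
    """
    updates = np.stack(client_updates)            # shape (n_clients, n_params)
    scores = np.asarray(trust_scores, dtype=float)

    # Filtering layer: measure each update's distance from a robust center.
    center = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - center, axis=1)
    cutoff = dist_threshold * np.median(dists)
    keep = dists <= max(cutoff, 1e-12)            # tolerate all-identical updates

    # Voting layer: renormalize weights over the surviving clients only.
    w = scores * keep
    w = w / w.sum()
    return w @ updates, keep
```

For example, four benign updates near `[1, 1]` and one poisoned update at `[100, 100]` with equal trust scores yield an aggregate close to the benign mean, with the poisoned client masked out. The real framework replaces both heuristics with the learned components described in the abstract.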