Blockchain-Based Gradient Inversion and Poisoning Defense for Federated Learning

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
Citation:
IEEE Internet of Things Journal, vol. PP, no. 99, pp. 1-1, 2023
Issue Date:
2023-01-01
Abstract:
Federated learning has emerged as a promising privacy-preserving machine learning technology, enabling multiple clients to collaboratively train a global model without sharing their raw data. With the increasing adoption of federated learning in Internet of Things (IoT) scenarios, concerns about security and privacy have become critical. In particular, gradient inversion attacks and poisoning attacks pose significant threats to the privacy of client data and to the integrity of the global model. In response, we propose a comprehensive blockchain-based defense mechanism that protects federated learning systems from both classes of attack. We develop a novel combination of techniques, comprising public-blockchain-level protection and private-blockchain-level protection, which work in tandem to prevent attackers from reconstructing clients' training images from the gradients they obtain. This combination provides a robust defense against gradient inversion attacks in federated learning IoT scenarios. We conduct extensive experiments to validate the effectiveness of our proposed approach against gradient inversion and poisoning attacks. Our results demonstrate improved accuracy and stable convergence of the training loss under poisoning attacks, indicating that our method can be applied to a wide range of federated learning IoT scenarios, enhancing both the security and privacy of distributed machine learning systems.
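To make the threat concrete: this is not the paper's method, but a minimal, well-known illustration of why shared gradients can leak private inputs. For a single linear layer with a bias term trained on one sample under squared-error loss, the gradient with respect to the weights is a scalar multiple of the input, and the bias gradient is exactly that scalar, so the attacker can recover the input analytically. All variable names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-sample setup: linear model y = w @ x + b,
# loss L = (y - t)^2. Only the gradients are shared with the server.
x_true = rng.normal(size=5)   # client's private input
t = 1.0                       # client's label
w = rng.normal(size=5)        # current global weights (known to all)
b = 0.0

# Client-side gradient computation:
#   dL/dw = 2 (w @ x + b - t) * x    (a scalar multiple of x)
#   dL/db = 2 (w @ x + b - t)        (exactly that scalar)
residual = w @ x_true + b - t
grad_w = 2.0 * residual * x_true
grad_b = 2.0 * residual

# An honest-but-curious server reconstructs the private input exactly:
x_reconstructed = grad_w / grad_b

assert np.allclose(x_reconstructed, x_true)
```

Deeper networks do not admit such a closed-form inversion, so practical gradient inversion attacks instead optimize a dummy input whose gradients match the shared ones; the leakage principle, however, is the same, which is why defenses aim to keep raw per-client gradients out of an attacker's reach.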