Improve individual fairness in federated learning via adversarial training

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Computers & Security, 2023, 132, art. 103336
Issue Date:
2023-09-01
Abstract:
Federated learning (FL) has been widely investigated in recent years. As FL moves toward real-world deployment, the fairness issues it raises deserve attention, yet relevant studies remain scarce. Unlike previous work on group fairness in FL or on fairness in centralized machine learning, this paper is the first to consider privacy and individual fairness jointly, and proposes to promote individual fairness in FL through distributed adversarial training without violating data privacy. Specifically, we regard a model satisfying individual fairness as one that is robust to certain sensitive perturbations, which aligns with the goal of adversarial training. We then transform the task of training an individually fair FL model into an adversarial training task. To obey the FL requirement that data stay private on the clients, we execute the adversarial training on the client side in a distributed manner. Extensive experimental results on two real datasets demonstrate the effectiveness of the proposed method, which not only improves individual fairness significantly but also improves group fairness at the same time.