Towards Explainable Personalisation for Federated Learning

Publication Type: Thesis
Issue Date: 2024
Modern machine learning, especially deep learning, depends on large datasets, but growing privacy concerns complicate data collection. Federated Learning (FL) offers a solution by enabling global model training without centralising user data. Personalised Federated Learning (PerFL) further enhances FL by tailoring models to individual clients, yielding improved performance, though explainable personalisation remains a challenge. This research addresses these issues by recognising client preferences and enabling on-deployment personalisation for practical, interpretable model outputs.

The research first introduces the Federated Dual Variational Autoencoder (FedDVA) framework, which disentangles data representations into common (client-agnostic) and specific (client-personalised) components, enhancing interpretability. Next, the Client-Decorrelation Federated Learning (FedCD) framework constructs a universal representation space shared across clients, identifying client properties through biases in their local data; this alignment clarifies how client-specific influences manifest in the data. Finally, the study presents Virtual Concepts (VCs), vectors that capture data-partition structures and biases, making personalisation more explicit.

Experiments on real-world datasets validate the effectiveness of these methods, demonstrating successful disentanglement of personalisation, aligned representations, and clustering that reflects client preferences. The global model effectively learns these preferences, achieving performance competitive with complex PerFL models without requiring model-specific adaptations.
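The abstract does not specify FedDVA's architecture. As a rough illustration only, the idea of encoding each input into a common latent and a client-specific latent, then reconstructing from both, can be sketched as a dual-latent VAE forward pass with linear Gaussian encoders. All names (`z_c`, `z_s`, the weight matrices) and dimensions here are hypothetical, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_mu, w_logvar):
    """Linear Gaussian encoder head: mean and log-variance of a latent."""
    return x @ w_mu, x @ w_logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (the standard VAE reparameterisation trick)."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def kl_to_standard_normal(mu, logvar):
    """KL(N(mu, sigma^2) || N(0, I)), summed over latent dims, mean over batch."""
    return 0.5 * np.mean(np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1))

# Toy dimensions: 8-d input, 2-d common latent z_c, 2-d client-specific latent z_s.
d_in, d_c, d_s = 8, 2, 2
x = rng.standard_normal((16, d_in))  # a batch of one client's local data

# Two encoder heads (common and client-specific) plus one joint decoder.
w_mu_c, w_lv_c = rng.standard_normal((d_in, d_c)), rng.standard_normal((d_in, d_c)) * 0.01
w_mu_s, w_lv_s = rng.standard_normal((d_in, d_s)), rng.standard_normal((d_in, d_s)) * 0.01
w_dec = rng.standard_normal((d_c + d_s, d_in))

mu_c, lv_c = encode(x, w_mu_c, w_lv_c)   # client-agnostic branch
mu_s, lv_s = encode(x, w_mu_s, w_lv_s)   # client-personalised branch
z = np.concatenate([reparameterize(mu_c, lv_c, rng),
                    reparameterize(mu_s, lv_s, rng)], axis=1)
x_hat = z @ w_dec                        # reconstruction uses both latents

# ELBO-style objective: reconstruction error plus one KL term per latent branch.
loss = (np.mean((x - x_hat) ** 2)
        + kl_to_standard_normal(mu_c, lv_c)
        + kl_to_standard_normal(mu_s, lv_s))
```

In an FL setting one would typically share the common-branch parameters across clients while the specific branch stays local, but how FedDVA actually trains and aggregates these components is detailed in the thesis body, not here.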