Controlling update distance and enhancing fair trainable prototypes in federated learning under data and model heterogeneity
Heterogeneous federated learning (HtFL) has gained significant attention for its ability to accommodate diverse models and data from distributed clients. Prototype-based HtFL methods have been proposed to reduce the high communication cost of transmitting model parameters: they share only class representatives (prototypes) between heterogeneous clients while preserving privacy. However, existing prototype learning approaches fail to account for clients' data distributions, which leads to suboptimal global prototype learning and limits the personalization of client models. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism, based on contrastive learning, is proposed to maintain personalization while promoting global consistency. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
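To make the prototype-sharing workflow concrete, the following PyTorch-style sketch illustrates the three ingredients the abstract names: client-side prototype computation, a fairness-weighted server aggregation standing in for FSTP, and a contrastive alignment loss in the spirit of LPSU. This is a minimal illustration, not the paper's implementation: the function names, the per-class count weighting, and the InfoNCE-style loss are all simplifying assumptions, and the hyperbolic space constraint is omitted.

```python
import torch
import torch.nn.functional as F

def compute_prototypes(features, labels, num_classes):
    """Client side: the mean embedding per class is the 'class representative'
    (prototype) shared with the server instead of model parameters."""
    protos = torch.zeros(num_classes, features.size(1), device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos

def aggregate_prototypes(client_protos, client_counts):
    """Server side: per-class aggregation weighted by each client's sample counts.
    A simple stand-in for fair sampling of trainable prototypes (FSTP)."""
    counts = torch.stack(client_counts).float()           # (clients, classes)
    weights = counts / counts.sum(dim=0).clamp(min=1.0)   # per-class fair weights
    stacked = torch.stack(client_protos)                  # (clients, classes, dim)
    return (weights.unsqueeze(-1) * stacked).sum(dim=0)   # (classes, dim)

def prototype_contrastive_loss(features, labels, global_protos, tau=0.5):
    """Local update: an InfoNCE-style loss that pulls each embedding toward its
    global class prototype and away from the others, so personalization stays
    anchored to the globally consistent prototypes (in the spirit of LPSU)."""
    logits = F.cosine_similarity(features.unsqueeze(1),
                                 global_protos.unsqueeze(0), dim=-1) / tau
    return F.cross_entropy(logits, labels)
```

In a full round under these assumptions, each client would send its `compute_prototypes` output and per-class counts to the server, receive the aggregated global prototypes back, and add `prototype_contrastive_loss` to its local training objective; the hyperbolic space constraint would further regularize how the server places the trainable prototypes.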