Personalized federated learning (PFL) has been widely investigated to address the challenge of data heterogeneity, especially when a single generic model is inadequate in satisfying the diverse performance requirements of local clients simultaneously.
This paper proposes a new federated virtual learning method, named Federated Virtual Learning on Heterogeneous Data with Local-Global Distillation (FEDLGD), which uses local and global distillation to create a smaller synthetic dataset for training in federated learning and thereby addresses the problems caused by data heterogeneity. Experiments show that the method outperforms current state-of-the-art heterogeneous FL algorithms while using only a very limited amount of distilled virtual data.
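To make the setting concrete, the following is a minimal toy sketch (not the authors' FEDLGD implementation) of federated training on small per-client "virtual" datasets: each client holds a handful of synthetic samples standing in for distilled data, trains a linear model locally, and the server averages the client weights in FedAvg style. The data generation, model, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_virtual_data(n=20, d=5):
    # Stand-in for a client's distilled virtual dataset; a real
    # distillation step would optimize these samples, here they are
    # simply random draws around a shared linear ground truth.
    X = rng.normal(size=(n, d))
    w_true = np.arange(1, d + 1, dtype=float)
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def local_train(w, X, y, lr=0.05, steps=50):
    # Plain gradient descent on squared error (local update).
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, clients):
    # Each client starts from the global model; the server averages
    # the locally trained weights (FedAvg-style aggregation).
    updates = [local_train(w_global.copy(), X, y) for X, y in clients]
    return np.mean(updates, axis=0)

clients = [make_virtual_data() for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = federated_round(w, clients)

# Average squared error of the aggregated model across clients.
loss = np.mean([np.mean((X @ w - y) ** 2) for X, y in clients])
print(round(loss, 3))
```

The point of the sketch is only that each client trains on a tiny synthetic set rather than its full raw data; FEDLGD's actual local-global distillation procedure for producing and refining those virtual samples is not reproduced here.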