SPIRE: Conditional Personalization for Federated Diffusion Generative Models
- URL: http://arxiv.org/abs/2506.12303v1
- Date: Sat, 14 Jun 2025 01:40:31 GMT
- Title: SPIRE: Conditional Personalization for Federated Diffusion Generative Models
- Authors: Kaan Ozkara, Ruida Zhou, Suhas Diggavi
- Abstract summary: Shared Backbone Personal Identity Representation Embeddings (SPIRE) is a framework that casts per-client diffusion-based generation as conditional generation in FL. SPIRE factorizes the network into (i) a high-capacity global backbone that learns a population-level score function and (ii) lightweight, learnable client embeddings that encode local data statistics. Our analysis also hints at how client embeddings act as biases that steer a shared score network toward personalized distributions.
- Score: 7.8583640700306585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in diffusion models have revolutionized generative AI, but their sheer size makes on-device personalization, and thus effective federated learning (FL), infeasible. We propose Shared Backbone Personal Identity Representation Embeddings (SPIRE), a framework that casts per-client diffusion-based generation as conditional generation in FL. SPIRE factorizes the network into (i) a high-capacity global backbone that learns a population-level score function and (ii) lightweight, learnable client embeddings that encode local data statistics. This separation enables parameter-efficient finetuning that touches $\leq 0.01\%$ of weights. We provide the first theoretical bridge between conditional diffusion training and maximum likelihood estimation in Gaussian mixture models. For a two-component mixture we prove that gradient descent on the DDPM loss with respect to the mixing weights recovers the optimal mixing weights and enjoys dimension-free error bounds. Our analysis also hints at how client embeddings act as biases that steer a shared score network toward personalized distributions. Empirically, SPIRE matches or surpasses strong baselines during collaborative pretraining, and vastly outperforms them when adapting to unseen clients, reducing Kernel Inception Distance while updating only hundreds of parameters. SPIRE further mitigates catastrophic forgetting and remains robust across finetuning learning rate and epoch choices.
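The abstract's factorization lends itself to a compact implementation sketch. The following PyTorch-style code is a minimal illustration of a shared score backbone conditioned on per-client embeddings, with new-client adaptation that trains only the embedding row; class and variable names (e.g., ClientConditionedScoreNet) are illustrative assumptions, not from the paper.

```python
# Minimal sketch of the SPIRE-style factorization described in the abstract:
# a shared score backbone conditioned on a lightweight per-client embedding.
# All names are illustrative; this is not the paper's implementation.
import torch
import torch.nn as nn

class ClientConditionedScoreNet(nn.Module):
    def __init__(self, data_dim: int, embed_dim: int, num_clients: int):
        super().__init__()
        # (i) high-capacity global backbone (population-level score function)
        self.backbone = nn.Sequential(
            nn.Linear(data_dim + embed_dim + 1, 256),  # +1 for the timestep
            nn.SiLU(),
            nn.Linear(256, 256),
            nn.SiLU(),
            nn.Linear(256, data_dim),
        )
        # (ii) lightweight, learnable client embeddings
        self.client_embed = nn.Embedding(num_clients, embed_dim)

    def forward(self, x_t, t, client_id):
        cond = self.client_embed(client_id)               # local data statistics
        h = torch.cat([x_t, cond, t.unsqueeze(-1)], dim=-1)
        return self.backbone(h)                           # predicted noise/score

net = ClientConditionedScoreNet(data_dim=32, embed_dim=8, num_clients=100)

# Adapting to an unseen client: freeze the backbone and train only the
# embedding table -- the parameter-efficient regime from the abstract.
for p in net.backbone.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(net.client_embed.parameters(), lr=1e-2)
```

Freezing the backbone and optimizing only the embedding is what keeps the finetuned fraction of weights tiny, consistent with the abstract's $\leq 0.01\%$ claim.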
Related papers
- FedCAR: Cross-client Adaptive Re-weighting for Generative Models in Federated Learning [3.7088276910640365]
Federated learning is a privacy-preserving solution for training on distributed datasets across data centers. We propose a novel algorithm aimed at improving the performance of generative models within FL. Experimental results on three public chest X-ray datasets show superior performance in medical image generation.
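The summary names the key idea (adaptive re-weighting of client contributions during aggregation) without giving the rule. Below is a hedged sketch under the assumption that aggregation weights are derived from a per-client generative-quality score such as FID; the inverse-score weighting is an illustrative choice, not FedCAR's formula.

```python
# Hedged sketch of cross-client adaptive re-weighting during aggregation.
# The weighting rule below (inverse of a quality metric) is illustrative.
import numpy as np

def aggregate(client_states, quality_scores):
    """Weighted average of client model parameters.

    client_states: list of dicts mapping parameter name -> np.ndarray
    quality_scores: per-client generation metric, e.g. FID (lower = better)
    """
    inv = 1.0 / (np.asarray(quality_scores, dtype=float) + 1e-8)
    weights = inv / inv.sum()                  # better generators weigh more
    agg = {}
    for name in client_states[0]:
        agg[name] = sum(w * s[name] for w, s in zip(weights, client_states))
    return agg
```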
arXiv Detail & Related papers (2024-12-16T05:43:14Z)
- Collaborative and Efficient Personalization with Mixtures of Adaptors [5.195669033269619]
Federated Low-Rank Adaptive Learning (FLoRAL) allows clients to personalize in groups by mixing between low-rank adaptors. FLoRAL is a model parameterization that casts personalized federated learning as a multi-task learning problem.
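A minimal sketch of the "mixture of low-rank adaptors" idea, assuming a frozen shared base layer, a small bank of LoRA-style adaptors, and per-client mixture logits; the two-adaptor linear layer and softmax mixing are illustrative assumptions rather than FLoRAL's exact parameterization.

```python
# Illustrative mixture of low-rank adaptors over a frozen base linear layer.
import torch
import torch.nn as nn

class MixtureOfLoRA(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, rank: int = 4, num_adaptors: int = 2):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        for p in self.base.parameters():      # shared base stays frozen
            p.requires_grad_(False)
        # bank of low-rank adaptors shared across clients
        self.A = nn.Parameter(torch.randn(num_adaptors, rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_adaptors, out_dim, rank))
        # per-client mixture logits: the lightweight personalization knob
        self.mix_logits = nn.Parameter(torch.zeros(num_adaptors))

    def forward(self, x):
        pi = torch.softmax(self.mix_logits, dim=0)           # mixture weights
        delta = torch.einsum("k,koi->oi", pi, self.B @ self.A)
        return self.base(x) + x @ delta.t()   # base + mixed low-rank update
```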
arXiv Detail & Related papers (2024-10-04T15:11:15Z)
- Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GAN's inherent flaws and biased optimization within latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z)
- Decentralized Directed Collaboration for Personalized Federated Learning [39.29794569421094]
We concentrate on Decentralized Personalized Federated Learning (DPFL), which performs model training via distributed computation.
We propose a directed collaboration framework that incorporates Decentralized Federated Partial Gradient Push (DFedPGP).
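To make the directed, partial-sharing pattern concrete, here is a hedged sketch of one communication round: clients train locally, push only their shared parameters along directed edges, and average what they receive. The client interface (shared_params, set_shared_params) and the plain averaging rule are assumptions for illustration, not DFedPGP's exact update.

```python
# Illustrative round of directed, partial-model collaboration.
import numpy as np

def average(states):
    """Elementwise mean of a list of parameter dicts (name -> np.ndarray)."""
    return {k: np.mean([s[k] for s in states], axis=0) for k in states[0]}

def decentralized_round(clients, out_neighbors, local_step):
    # 1) local training touches both shared and personal parameters
    for c in clients:
        local_step(c)
    # 2) push only the shared parameters along directed edges
    inbox = {c.id: [] for c in clients}
    for c in clients:
        for nbr in out_neighbors[c.id]:
            inbox[nbr].append(c.shared_params())
    # 3) each client averages received shared parts with its own;
    #    personal parameters never leave the device
    for c in clients:
        c.set_shared_params(average(inbox[c.id] + [c.shared_params()]))
```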
arXiv Detail & Related papers (2024-05-28T06:52:19Z)
- FedSPU: Personalized Federated Learning for Resource-constrained Devices with Stochastic Parameter Update [0.27309692684728615]
Personalized Federated Learning (PFL) is widely employed in IoT applications to handle high-volume, non-IID client data. FedSPU outperforms Federated Dropout by 7.57% on average in terms of accuracy. An introduced early-stopping scheme reduces training time by 24.8%-70.4% while maintaining high accuracy.
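A short sketch of the stochastic-parameter-update idea as read from the summary: instead of dropping neurons, each round freezes a random subset of parameters and trains the rest, so the full model stays on the device. The masking scheme and update fraction are illustrative assumptions.

```python
# Illustrative stochastic parameter update: train a random subset per round.
import torch

def stochastic_update_masks(model, update_fraction=0.3, generator=None):
    """Return a boolean mask per parameter; True = update this entry."""
    return {
        name: torch.rand(p.shape, generator=generator) < update_fraction
        for name, p in model.named_parameters()
    }

def apply_masked_sgd_step(model, masks, lr=0.01):
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None:
                p -= lr * p.grad * masks[name]   # frozen entries stay put
```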
arXiv Detail & Related papers (2024-03-18T04:31:38Z)
- FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z)
- Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems face large communication burdens and the risk of disruption if the central FL server fails.
It personalizes the "right" components in the deep models by alternately updating the shared and personal parameters.
To further promote the shared-parameter aggregation process, we propose DFedSalt, which integrates the local Sharpness-Aware Minimization (SAM) optimizer.
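The alternating shared/personal update pattern is easy to sketch; the following assumes PyTorch-style parameters split into two lists, one optimizer per group, and a single step per phase, all as illustrative choices.

```python
# Hedged sketch of alternating partial-model training: personal parameters
# are updated with the shared ones frozen, then the roles are swapped.
# The shared/personal partition and one-step phases are assumptions
# (PyTorch tensors are expected).
def alternating_local_training(shared, personal, loss_fn,
                               opt_shared, opt_personal, rounds=1):
    def set_trainable(params, flag):
        for p in params:
            p.requires_grad_(flag)

    for _ in range(rounds):
        # phase 1: update personal parameters, shared frozen
        set_trainable(shared, False)
        set_trainable(personal, True)
        opt_personal.zero_grad()
        loss_fn().backward()
        opt_personal.step()
        # phase 2: update shared parameters, personal frozen
        set_trainable(shared, True)
        set_trainable(personal, False)
        opt_shared.zero_grad()
        loss_fn().backward()
        opt_shared.step()
```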
arXiv Detail & Related papers (2023-05-24T13:52:18Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
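The summary's "minimal overhead" adaptation suggests reusing global Gaussian components and fitting only local mixture weights. Below is a hedged EM-style sketch under that assumption (fixed shared means/covariances, only the weights updated); it is an illustrative reading, not FedGMM's full algorithm.

```python
# Illustrative new-client adaptation: one EM loop over mixture weights only.
import numpy as np
from scipy.stats import multivariate_normal

def adapt_mixture_weights(x_local, means, covs, n_iters=20):
    k = len(means)
    pi = np.full(k, 1.0 / k)                      # start from uniform weights
    for _ in range(n_iters):
        # E-step: responsibilities under the fixed global components
        dens = np.stack([multivariate_normal.pdf(x_local, means[j], covs[j])
                         for j in range(k)], axis=1)          # shape (n, k)
        resp = dens * pi
        resp /= resp.sum(axis=1, keepdims=True)
        pi = resp.mean(axis=0)                    # M-step: only the weights
    return pi
```

The per-sample likelihoods computed in the E-step are also the natural hook for the uncertainty quantification and novel-sample detection mentioned above.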
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
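The summary does not describe FedReg's mechanism, so the sketch below shows a generic way to alleviate forgetting in local training: a proximal penalty anchoring local weights to the received global model (the FedProx-style technique, used here purely to make the problem setting concrete, not as FedReg itself).

```python
# Generic anchor-based regularizer against local forgetting (not FedReg).
def local_loss_with_anchor(task_loss, model, global_params, mu=0.1):
    """task_loss: scalar torch tensor; global_params: tensors from the server."""
    # quadratic penalty keeping local weights near the received global model,
    # limiting how much previously learned global knowledge is overwritten
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    return task_loss + 0.5 * mu * prox
```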
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain-assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
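The round structure described above maps directly to a small protocol sketch. Client methods (train_local, set_model), the winner-selection stub, and plain averaging are all assumed interfaces for illustration, since the summary abstracts away the actual block generation.

```python
# Illustrative BLADE-FL round: broadcast, mine a block, aggregate from it.
import numpy as np

def average(models):
    """Elementwise mean of parameter dicts (name -> np.ndarray)."""
    return {k: np.mean([m[k] for m in models], axis=0) for k in models[0]}

def blade_fl_round(clients, winner_selector):
    # 1) every client trains locally and broadcasts its model
    broadcasts = [c.train_local() for c in clients]
    # 2) clients compete to generate a block from the received models;
    #    the mining competition is stubbed out here
    _winner = winner_selector(clients)
    block = broadcasts                    # models recorded in the winning block
    # 3) each client aggregates the block's models before its next round
    for c in clients:
        c.set_model(average(block))
```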
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.