Variational Gaussian Mixture Manifold Models for Client-Specific Federated Personalization
- URL: http://arxiv.org/abs/2509.10521v1
- Date: Thu, 04 Sep 2025 01:28:02 GMT
- Title: Variational Gaussian Mixture Manifold Models for Client-Specific Federated Personalization
- Authors: Sai Puppala, Ismail Hossain, Md Jahangir Alam, Sajedul Talukder
- Abstract summary: VGM$^2$ is a geometry-centric PFL framework that learns client-specific parametric UMAP embeddings.
Each client maintains a Dirichlet-Normal-Inverse-Gamma posterior over marker weights, means, and variances.
VGM$^2$ achieves competitive or superior test F1 scores compared to strong baselines.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized federated learning (PFL) often fails under label skew and non-stationarity because a single global parameterization ignores client-specific geometry. We introduce VGM$^2$ (Variational Gaussian Mixture Manifold), a geometry-centric PFL framework that (i) learns client-specific parametric UMAP embeddings, (ii) models latent pairwise distances with mixture relation markers for same and different class pairs, and (iii) exchanges only variational, uncertainty-aware marker statistics. Each client maintains a Dirichlet-Normal-Inverse-Gamma (Dir-NIG) posterior over marker weights, means, and variances; the server aggregates via conjugate moment matching to form global priors that guide subsequent rounds. We prove that this aggregation minimizes the summed reverse Kullback-Leibler divergence from client posteriors within the conjugate family, yielding stability under heterogeneity. We further incorporate a calibration term for distance-to-similarity mapping and report communication and compute budgets. Across eight vision datasets with non-IID label shards, VGM$^2$ achieves competitive or superior test F1 scores compared to strong baselines while communicating only small geometry summaries. Privacy is strengthened through secure aggregation and optional differential privacy noise, and we provide a membership-inference stress test. Code and configurations will be released to ensure full reproducibility.
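The server-side step in the abstract, aggregating client posteriors by conjugate moment matching, can be illustrated on its simplest component: pooling the per-client Gaussian posteriors over a marker mean. The sketch below is illustrative only (the function name, equal-weight default, and restriction to a single Gaussian are assumptions, not the paper's exact Dir-NIG update rules): it fits one Gaussian to the mixture of client posteriors by matching the mixture's first two moments.

```python
import numpy as np

def moment_match_gaussians(mus, variances, weights=None):
    """Fit a single Gaussian to a mixture of client posteriors
    N(mu_k, var_k) by matching the mixture's mean and variance.
    Hypothetical sketch of a server-side moment-matching step;
    the paper aggregates full Dir-NIG posteriors."""
    mus = np.asarray(mus, dtype=float)
    variances = np.asarray(variances, dtype=float)
    if weights is None:
        weights = np.full(len(mus), 1.0 / len(mus))
    weights = np.asarray(weights, dtype=float)
    m = np.sum(weights * mus)                           # mixture mean
    v = np.sum(weights * (variances + mus**2)) - m**2   # mixture variance
    return m, v

# Two clients with posteriors N(0, 1) and N(2, 1), equal weight:
m, v = moment_match_gaussians([0.0, 2.0], [1.0, 1.0])
# m == 1.0, v == 2.0
```

Note that the between-client disagreement term inflates the pooled variance above each client's own (here 2.0 > 1.0), which is plausibly the kind of uncertainty-aware pooling under heterogeneity the abstract describes.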
Related papers
- Handling Covariate Mismatch in Federated Linear Prediction [2.5782420501870296]
Federated learning enables institutions to train predictive models collaboratively without sharing raw data.
Most existing methods assume that all clients measure the same features.
We formalize learning a linear prediction under client-wise MCAR patterns and develop two modular approaches.
arXiv Detail & Related papers (2026-02-02T13:29:36Z)
- Replacing Parameters with Preferences: Federated Alignment of Heterogeneous Vision-Language Models [63.70401095689976]
We argue that replacing parameters with preferences represents a more scalable and privacy-preserving future.
We propose MoR, a federated alignment framework based on GRPO with Mixture-of-Rewards for heterogeneous VLMs.
MoR consistently outperforms federated alignment baselines in generalization, robustness, and cross-client adaptability.
arXiv Detail & Related papers (2026-01-31T03:11:51Z)
- From Global to Granular: Revealing IQA Model Performance via Correlation Surface [83.65597122328133]
We present Granularity-Modulated Correlation (GMC), which provides a structured, fine-grained analysis of IQA performance.
GMC includes a Distribution Regulator that regularizes correlations to mitigate biases from non-uniform quality distributions.
Experiments on standard benchmarks show that GMC reveals performance characteristics invisible to scalar metrics, offering a more informative and reliable paradigm for analyzing, comparing, and deploying IQA models.
arXiv Detail & Related papers (2026-01-29T13:55:26Z)
- FedSCAM (Federated Sharpness-Aware Minimization with Clustered Aggregation and Modulation): Scam-resistant SAM for Robust Federated Optimization in Heterogeneous Environments [0.0]
Federated Learning (FL) enables collaborative model training across decentralized edge devices while preserving data privacy.
Statistical heterogeneity among clients, often manifested as non-IID label distributions, poses significant challenges to convergence and generalization.
We propose FedSCAM (Federated Sharpness-Aware Minimization with Clustered Aggregation and Modulation), a novel algorithm that dynamically adjusts the SAM perturbation radius and aggregation weights based on client-specific heterogeneity scores.
arXiv Detail & Related papers (2025-12-29T19:42:50Z)
- CO-PFL: Contribution-Oriented Personalized Federated Learning for Heterogeneous Networks [51.43780477302533]
Contribution-Oriented PFL (CO-PFL) is a novel algorithm that dynamically estimates each client's contribution for global aggregation.
CO-PFL consistently surpasses state-of-the-art methods in personalization accuracy, robustness, scalability, and convergence stability.
arXiv Detail & Related papers (2025-10-23T05:10:06Z)
- Aggregation on Learnable Manifolds for Asynchronous Federated Optimization [3.8208848658169763]
We introduce a geometric framework that casts aggregation as curve learning.
Within this framework, we propose AsyncBezier, which replaces linear aggregation with low-degree curvature components.
We show that these gains are preserved even when other methods are allocated a higher local compute budget.
arXiv Detail & Related papers (2025-03-18T16:36:59Z)
- Collaborative and Efficient Personalization with Mixtures of Adaptors [5.195669033269619]
Federated Low-Rank Adaptive Learning (FLoRAL) allows clients to personalize in groups by mixing between low-rank adaptors.
FLoRAL is a model parameterization that casts personalized federated learning as a multi-task learning problem.
arXiv Detail & Related papers (2024-10-04T15:11:15Z)
- Personalized federated learning based on feature fusion [2.943623084019036]
Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy.
We propose a personalized federated learning approach called pFedPM.
In our process, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models.
arXiv Detail & Related papers (2024-06-24T12:16:51Z)
- Federated Contrastive Learning for Personalized Semantic Communication [55.46383524190467]
We design a federated contrastive learning framework aimed at supporting personalized semantic communication.
FedCL enables collaborative training of local semantic encoders across multiple clients and a global semantic decoder owned by the base station.
To tackle the semantic imbalance issue arising from heterogeneous datasets across distributed clients, we employ contrastive learning to train a semantic centroid generator.
arXiv Detail & Related papers (2024-06-13T14:45:35Z)
- Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes [54.18186259484828]
In Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidence that variable-length codes are beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
arXiv Detail & Related papers (2024-02-06T07:25:21Z)
- Privacy and Accuracy Implications of Model Complexity and Integration in Heterogeneous Federated Learning [8.842172558292027]
Federated Learning (FL) has been proposed as a privacy-preserving solution for distributed machine learning.
Recent studies have shown that it is susceptible to membership inference attacks (MIA), which can compromise the privacy of client data.
arXiv Detail & Related papers (2023-11-29T15:54:15Z)
- SIGMA: Scale-Invariant Global Sparse Shape Matching [50.385414715675076]
We propose a novel mixed-integer programming (MIP) formulation for generating precise sparse correspondences for non-rigid shapes.
We show state-of-the-art results for sparse non-rigid matching on several challenging 3D datasets.
arXiv Detail & Related papers (2023-08-16T14:25:30Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and biasness of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate superiority of FedVRA over the existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- Global Convergence of Federated Learning for Mixed Regression [17.8469597916875]
This paper studies the problem of model training under Federated Learning when clients exhibit cluster structure.
A key innovation in our analysis is a uniform estimate on clustering, which we prove by bounding the VC dimension of the general concept classes.
arXiv Detail & Related papers (2022-06-15T03:38:42Z)
- Federated Geometric Monte Carlo Clustering to Counter Non-IID Datasets [5.265938474748481]
Federated learning allows clients to collaboratively train models on datasets that cannot be exchanged because of their size or regulations.
Previous works tried to mitigate the effects of non-IID datasets on training accuracy, focusing mainly on non-IID labels.
We propose FedGMCC, a novel framework where a central server aggregates client models that it can cluster together.
arXiv Detail & Related papers (2022-04-23T08:23:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.