STT-GS: Sample-Then-Transmit Edge Gaussian Splatting with Joint Client Selection and Power Control
- URL: http://arxiv.org/abs/2510.13186v1
- Date: Wed, 15 Oct 2025 06:20:47 GMT
- Title: STT-GS: Sample-Then-Transmit Edge Gaussian Splatting with Joint Client Selection and Power Control
- Authors: Zhen Li, Xibin Jin, Guoliang Li, Shuai Wang, Miaowen Wen, Huseyin Arslan, Derrick Wing Kwan Ng, Chengzhong Xu
- Abstract summary: Edge Gaussian splatting (EGS) aggregates data from distributed clients and trains a global GS model at the edge server. This paper formulates a novel GS-oriented objective function that distinguishes the view contributions of different clients. It is found that the GS-oriented objective can be accurately predicted with low sampling ratios.
- Score: 77.56170394100022
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Edge Gaussian splatting (EGS), which aggregates data from distributed clients and trains a global GS model at the edge server, is an emerging paradigm for scene reconstruction. Unlike traditional edge resource management methods that emphasize communication throughput or general-purpose learning performance, EGS explicitly aims to maximize GS quality, rendering existing approaches inapplicable. To address this problem, this paper formulates a novel GS-oriented objective function that distinguishes the heterogeneous view contributions of different clients. However, evaluating this function in turn requires clients' images, leading to a causality dilemma. To this end, this paper further proposes a sample-then-transmit EGS (STT-GS for short) strategy, which first samples a subset of images as pilot data from each client for loss prediction. Based on this first-stage evaluation, communication resources are then prioritized towards more valuable clients. To achieve efficient sampling, a feature-domain clustering (FDC) scheme is proposed to select the most representative data, and pilot transmission time minimization (PTTM) is adopted to reduce the pilot overhead. Subsequently, we develop a joint client selection and power control (JCSPC) framework to maximize the GS-oriented function under communication resource constraints. Despite the nonconvexity of the problem, we propose a low-complexity solution based on the penalty alternating majorization minimization (PAMM) algorithm. Experiments show that the proposed scheme significantly outperforms existing benchmarks on real-world datasets. The GS-oriented objective can be accurately predicted with low sampling ratios (e.g., 10%), and our method achieves an excellent tradeoff between view contributions and communication costs.
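To make the two-stage pipeline above concrete, here is a minimal, hypothetical Python sketch. Everything in it is illustrative rather than the paper's method: the feature matrices, gain scores, and costs are placeholders, plain k-means stands in for the FDC scheme, and a greedy value-per-cost rule stands in for the PAMM-based JCSPC solver, which the sketch does not reproduce.

```python
# Hypothetical sketch of sample-then-transmit (all names and numbers are
# illustrative; k-means stands in for the paper's FDC scheme, and a greedy
# value-per-cost rule stands in for the PAMM-based JCSPC solver).
import numpy as np
from sklearn.cluster import KMeans

def select_pilot_images(features, n_pilots):
    """FDC-style sampling: cluster image features and return the index of
    the image nearest to each centroid as the client's pilot set."""
    km = KMeans(n_clusters=n_pilots, n_init=10, random_state=0).fit(features)
    pilots = []
    for c in range(n_pilots):
        members = np.flatnonzero(km.labels_ == c)
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        pilots.append(int(members[np.argmin(dists)]))
    return pilots

def greedy_client_selection(scores, costs, budget):
    """Toy JCSPC stand-in: admit clients by predicted GS contribution per
    unit transmission cost until the communication budget is exhausted."""
    scores, costs = np.asarray(scores, float), np.asarray(costs, float)
    chosen, spent = [], 0.0
    for k in np.argsort(-scores / costs):
        if spent + costs[k] <= budget:
            chosen.append(int(k))
            spent += costs[k]
    return chosen

# Stage 1: each client samples ~10% of its images as pilots (per the abstract).
rng = np.random.default_rng(0)
client_features = [rng.normal(size=(100, 64)) for _ in range(3)]  # 3 clients
pilot_sets = [select_pilot_images(f, n_pilots=10) for f in client_features]

# Stage 2: the server predicts each client's GS-objective gain from the pilots
# (placeholder values below) and prioritizes transmission accordingly.
predicted_gain = [0.8, 0.5, 0.3]
transmit_cost = [1.0, 0.4, 0.6]
print(greedy_client_selection(predicted_gain, transmit_cost, budget=1.2))  # [1, 2]
```

With a 10% sampling ratio the pilot overhead stays small, matching the tradeoff the abstract reports. Note that the real JCSPC problem also optimizes transmit power jointly with selection; this sketch abstracts that into a fixed per-client cost.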
Related papers
- Sporadic Gradient Tracking over Directed Graphs: A Theoretical Perspective on Decentralized Federated Learning [23.709425027235937]
Decentralized Federated Learning (DFL) enables clients with local data to collaborate in a peer-to-peer manner to train a generalized model. In this paper, we unify two branches of work that have separately solved important challenges in DFL: (i) gradient tracking techniques for mitigating data heterogeneity and (ii) accounting for diverse availability of resources across clients. We propose Sporadic Gradient Tracking (Spod-GT), the first DFL algorithm that incorporates these factors over general directed graphs by allowing (i) client-specific gradient computation frequencies and ...
arXiv Detail & Related papers (2026-01-31T15:58:36Z) - Closing the Generalization Gap in Parameter-efficient Federated Edge Learning [43.00634399799955]
Federated edge learning (FEEL) provides a promising foundation for artificial intelligence (AI). However, limited and heterogeneous local datasets, as well as resource-constrained deployment, severely degrade both model generalization and resource utilization. We propose a framework that jointly leverages model minimization and generalization selection to tackle such challenges.
arXiv Detail & Related papers (2025-11-28T15:34:09Z) - Towards Federated Clustering: A Client-wise Private Graph Aggregation Framework [57.04850867402913]
Federated clustering addresses the challenge of extracting patterns from decentralized, unlabeled data. We propose Structural Privacy-Preserving Federated Graph Clustering (SPP-FGC), a novel algorithm that innovatively leverages local structural graphs as the primary medium for privacy-preserving knowledge sharing. Our framework achieves state-of-the-art performance, improving clustering accuracy by up to 10% (NMI) over federated baselines while maintaining provable privacy guarantees.
arXiv Detail & Related papers (2025-11-14T03:05:22Z) - Edge Collaborative Gaussian Splatting with Integrated Rendering and Communication [69.23838350582764]
We present edge collaborative GS (ECO-GS), where each user can switch between a small local GS model to guarantee timeliness and a remote large GS model to guarantee fidelity. We propose integrated rendering and communication (IRAC), which jointly optimizes the rendering status and edge power allocation.
arXiv Detail & Related papers (2025-10-26T15:33:29Z) - A Model-agnostic Strategy to Mitigate Embedding Degradation in Personalized Federated Recommendation [34.915843795521134]
We propose a novel model-agnostic strategy for FedRec to strengthen the personalized embedding utility. PLGC is the first work in federated recommendation to alleviate the dimensional collapse issue.
arXiv Detail & Related papers (2025-08-27T06:03:52Z) - A Scalable Pretraining Framework for Link Prediction with Efficient Adaptation [16.82426251068573]
Link Prediction (LP) is a critical task in graph machine learning. Existing methods face key challenges, including limited supervision from sparse connectivity. We explore pretraining as a solution to address these challenges.
arXiv Detail & Related papers (2025-08-06T17:10:31Z) - Invariant Federated Learning for Edge Intelligence: Mitigating Heterogeneity and Asynchrony via Exit Strategy and Invariant Penalty [10.54196990763149]
This paper presents an invariant federated learning system for resource-constrained edge intelligence. It mitigates the impact of heterogeneity and asynchrony via an exit strategy and an invariant penalty. Experiments show the system can enhance in-distribution performance and outperform the state-of-the-art algorithm in out-of-distribution generalization.
arXiv Detail & Related papers (2025-03-08T10:47:27Z) - Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private. We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z) - Towards Federated Low-Rank Adaptation of Language Models with Rank Heterogeneity [12.515874333424929]
We observe that heterogeneous ranks among clients lead to unstable performance. Our analysis attributes this instability to the conventional zero-padding aggregation strategy. We propose a replication-based padding strategy that better retains valuable information from clients with high-quality data.
arXiv Detail & Related papers (2024-06-25T11:49:33Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
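To make the Fed-CBS entry above concrete, here is a minimal plaintext sketch of class-imbalance-aware client sampling. The imbalance measure (quadratic deviation from a uniform label distribution) and the greedy rule are illustrative stand-ins under our own assumptions; the paper's homomorphic-encryption step for privacy-preserving computation of the measure is omitted entirely.

```python
# Toy plaintext sketch of class-balanced client sampling in the spirit of
# Fed-CBS; the measure and selection rule are illustrative, and the paper's
# homomorphic-encryption step is omitted.
import numpy as np

def imbalance(counts):
    """Quadratic deviation of the grouped label distribution from uniform."""
    p = counts / counts.sum()
    return float(np.sum((p - 1.0 / p.size) ** 2))

def greedy_balanced_sampling(client_counts, m):
    """Greedily add the client that keeps the grouped data most class-balanced."""
    client_counts = [np.asarray(c, dtype=float) for c in client_counts]
    chosen, grouped = [], np.zeros_like(client_counts[0])
    while len(chosen) < m:
        candidates = [k for k in range(len(client_counts)) if k not in chosen]
        best = min(candidates, key=lambda k: imbalance(grouped + client_counts[k]))
        chosen.append(best)
        grouped += client_counts[best]
    return chosen

# Per-class label counts for three clients over three classes.
clients = [[90, 5, 5], [5, 90, 5], [40, 30, 30]]
print(greedy_balanced_sampling(clients, m=2))  # -> [2, 0]
```

A uniformly random draw of two clients can easily group two heavily skewed datasets; the greedy rule instead starts from the most balanced client and adds the least damaging one, which is the performance-degradation issue the Fed-CBS summary highlights.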
This list is automatically generated from the titles and abstracts of the papers on this site.