Graph Theory Meets Federated Learning over Satellite Constellations: Spanning Aggregations, Network Formation, and Performance Optimization
- URL: http://arxiv.org/abs/2509.24932v2
- Date: Thu, 02 Oct 2025 21:56:54 GMT
- Title: Graph Theory Meets Federated Learning over Satellite Constellations: Spanning Aggregations, Network Formation, and Performance Optimization
- Authors: Fardis Nadimi, Payam Abdisarabshali, Jacob Chakareski, Nicholas Mastronarde, Seyyedali Hosseinalipour
- Abstract summary: We introduce Fed-Span, a novel distributed learning framework for satellite networks. Fed-Span aims to address challenges inherent to distributed learning in satellite networks. We show that Fed-Span outperforms existing methods, with faster model convergence, greater energy efficiency, and reduced latency.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Fed-Span, a novel federated/distributed learning framework designed for low Earth orbit satellite constellations. Fed-Span aims to address critical challenges inherent to distributed learning in dynamic satellite networks, including intermittent satellite connectivity, heterogeneous computational capabilities of satellites, and time-varying satellite datasets. At its core, Fed-Span leverages minimum spanning tree (MST) and minimum spanning forest (MSF) topologies to introduce spanning model aggregation and dispatching processes for distributed learning. To formalize Fed-Span, we offer a fresh perspective on MST/MSF topologies by formulating them through a set of continuous constraint representations (CCRs), thereby turning these graph-theoretical abstractions into an optimizable framework for satellite networks. Using these CCRs, we derive the energy consumption and latency of Fed-Span's operations. Moreover, we derive novel convergence bounds for Fed-Span that accommodate its key system characteristics and degrees of freedom (i.e., tunable parameters). Finally, we propose a comprehensive optimization problem that jointly minimizes model prediction loss, energy consumption, and latency of Fed-Span. We show that this problem is NP-hard and develop a systematic approach to transform it into a geometric programming formulation, solved via successive convex optimization with performance guarantees. Through evaluations on real-world datasets, we demonstrate that Fed-Span outperforms existing methods, with faster model convergence, greater energy efficiency, and reduced latency. These results highlight Fed-Span as a novel solution for efficient distributed learning in satellite networks.
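As a toy illustration of the spanning-tree idea only (not Fed-Span's CCR-based formulation), the sketch below builds a minimum spanning tree over a hypothetical inter-satellite link graph with Kruskal's algorithm; the satellite names and link energy costs are invented for the example.

```python
# Minimal Kruskal's MST over an illustrative satellite link graph.
# Node names and edge costs are hypothetical; Fed-Span's actual method
# optimizes continuous constraint representations, not this discrete routine.

def kruskal_mst(nodes, edges):
    """edges: list of (cost, u, v). Returns the list of MST edges."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:            # adding this link creates no cycle
            parent[ru] = rv
            mst.append((cost, u, v))
    return mst

# Hypothetical inter-satellite links weighted by transmission energy.
sats = ["s1", "s2", "s3", "s4"]
links = [(1.0, "s1", "s2"), (2.5, "s1", "s3"), (1.2, "s2", "s3"),
         (3.0, "s2", "s4"), (0.8, "s3", "s4")]
tree = kruskal_mst(sats, links)
total = sum(c for c, _, _ in tree)
print(tree, total)  # 3 edges; total cost 3.0
```

Aggregating along such a tree means each satellite forwards its (merged) model update to its tree parent, so a global aggregate reaches the root over minimum-cost links.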
Related papers
- A Secure and Private Distributed Bayesian Federated Learning Design [56.92336577799572]
Distributed Federated Learning (DFL) enables decentralized model training across large-scale systems without a central parameter server. DFL faces three critical challenges: privacy leakage from honest-but-curious neighbors, slow convergence due to the lack of central coordination, and vulnerability to Byzantine adversaries aiming to degrade model accuracy. We propose a novel DFL framework that integrates Byzantine robustness, privacy preservation, and convergence acceleration.
arXiv Detail & Related papers (2026-02-23T16:12:02Z) - OptiVote: Non-Coherent FSO Over-the-Air Majority Vote for Communication-Efficient Distributed Federated Learning in Space Data Centers [68.73273027298625]
Mega-constellations are driving the long-term vision of space data centers (SDCs). Over-the-air computation (AirComp) provides an in-network aggregation framework over free-space optical (FSO) links. OptiVote integrates signSGD with a majority vote via pulse-position modulation (PPM), where each satellite conveys its local gradient signs by activating PPM time slots. OptiVote replaces phase-sensitive field superposition with phase-agnostic optical intensity combining.
arXiv Detail & Related papers (2025-12-30T16:40:02Z) - Joint AoI and Handover Optimization in Space-Air-Ground Integrated Network [48.485907216785904]
Low Earth orbit (LEO) satellite constellations offer promising solutions with global coverage and reduced latency, yet they struggle with intermittent coverage and communication windows due to orbital dynamics. Our three-layer design employs hybrid free-space optical (FSO) links for high-capacity satellite-to-ground communication and reliable radio frequency (RF) links for HAP-to-ground transmission.
arXiv Detail & Related papers (2025-09-16T06:16:56Z) - Accelerating Privacy-Preserving Federated Learning in Large-Scale LEO Satellite Systems [57.692181589325116]
Large-scale low-Earth-orbit (LEO) satellite systems are increasingly valued for their ability to enable rapid and wide-area data exchange. Due to privacy concerns and regulatory constraints, raw data collected at remote clients cannot be centrally aggregated. Federated learning offers a privacy-preserving alternative by training local models on distributed devices and exchanging only model parameters. We propose a discrete temporal graph-based on-demand scheduling framework that dynamically allocates communication resources to accelerate federated learning.
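To give a concrete feel for a discrete temporal graph in this setting, here is a minimal sketch of a time-expanded contact graph, where a model update either waits on board a satellite or crosses a link during an active contact slot. The contact plan, node names, and BFS-based earliest-delivery search are illustrative assumptions, not the paper's scheduler.

```python
# Time-expanded contact graph: node (n, t) is satellite n at slot t.
# Contacts and horizon are hypothetical, for illustration only.
from collections import defaultdict

def time_expanded_edges(contacts, horizon):
    """contacts: {(u, v): [active slots]}. Returns edges of the
    time-expanded graph: store-and-wait edges plus contact edges."""
    nodes = {n for pair in contacts for n in pair}
    edges = []
    for n in nodes:                         # waiting on board a satellite
        for t in range(horizon - 1):
            edges.append(((n, t), (n, t + 1)))
    for (u, v), slots in contacts.items():  # transmission during a contact
        for t in slots:
            if t + 1 < horizon:
                edges.append(((u, t), (v, t + 1)))
    return edges

def earliest_delivery(contacts, horizon, src, dst):
    """BFS over the time-expanded graph: earliest slot dst holds the model."""
    adj = defaultdict(list)
    for a, b in time_expanded_edges(contacts, horizon):
        adj[a].append(b)
    frontier, seen = [(src, 0)], {(src, 0)}
    while frontier:
        nxt = []
        for state in frontier:
            node, t = state
            if node == dst:
                return t
            for b in adj[state]:
                if b not in seen:
                    seen.add(b)
                    nxt.append(b)
        frontier = nxt
    return None

# Hypothetical contact plan: A can reach B at slot 1, B can reach C at slot 3.
plan = {("A", "B"): [1], ("B", "C"): [3]}
slot = earliest_delivery(plan, horizon=6, src="A", dst="C")
print(slot)  # update from A reaches C at slot 4
```

A scheduler can then allocate bandwidth along the earliest-delivery paths found in this expanded graph.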
arXiv Detail & Related papers (2025-09-05T03:33:42Z) - Decentralized Nonconvex Composite Federated Learning with Gradient Tracking and Momentum [78.27945336558987]
Decentralized federated learning (DFL) eliminates reliance on the client-server architecture. Non-smooth regularization is often incorporated into machine learning tasks. We propose a novel DNCFL algorithm to solve these problems.
arXiv Detail & Related papers (2025-04-17T08:32:25Z) - Fed-KAN: Federated Learning with Kolmogorov-Arnold Networks for Traffic Prediction [10.34834816497689]
Traditional centralized learning approaches face major challenges in such networks due to high latency, intermittent connectivity, and limited bandwidth. Existing FL models, such as Federated Learning with Multi-Layer Perceptrons (Fed-MLP), can struggle with high computational complexity and poor adaptability to dynamic environments. This paper provides a detailed analysis of Federated Learning with Kolmogorov-Arnold Networks (Fed-KAN). Our results show that Fed-KAN can achieve a 77.39% reduction in average test loss compared to Fed-MLP, highlighting its improved performance and better generalization ability.
arXiv Detail & Related papers (2025-02-28T20:04:53Z) - FedMeld: A Model-dispersal Federated Learning Framework for Space-ground Integrated Networks [29.49615352723995]
Space-ground integrated networks (SGINs) are expected to deliver artificial intelligence (AI) services to every corner of the world. One mission of SGINs is to support federated learning (FL) at a global scale. We propose an infrastructure-free federated learning framework based on a model dispersal (FedMeld) strategy.
arXiv Detail & Related papers (2024-12-23T02:58:12Z) - A Distance Similarity-based Genetic Optimization Algorithm for Satellite Ground Network Planning Considering Feeding Mode [53.71516191515285]
The low transmission efficiency of the satellite data relay-back mission has become a problem that currently constrains the construction of the system.
We propose a distance similarity-based genetic optimization algorithm (DSGA), which considers the state characteristics between the tasks and introduces a weighted Euclidean distance method to determine the similarity between the tasks.
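A weighted Euclidean distance between task feature vectors can be sketched as follows; the features, weights, and the distance-to-similarity mapping are illustrative guesses, not DSGA's exact construction.

```python
# Hypothetical weighted Euclidean similarity between task feature vectors,
# in the spirit of DSGA; the feature values and weights are made up.
import math

def weighted_euclidean(a, b, w):
    """Weighted Euclidean distance: sqrt(sum_i w_i * (a_i - b_i)^2)."""
    return math.sqrt(sum(wi * (ai - bi) ** 2 for ai, bi, wi in zip(a, b, w)))

def similarity(a, b, w):
    # Map distance into (0, 1]; identical tasks get similarity 1.
    return 1.0 / (1.0 + weighted_euclidean(a, b, w))

task1 = [0.2, 0.8, 1.5]   # e.g. start time, duration, data volume (normalized)
task2 = [0.2, 0.8, 1.5]
w = [1.0, 0.5, 2.0]       # per-feature importance weights
sim = similarity(task1, task2, w)
print(sim)  # identical tasks -> 1.0
```

Such a similarity score could then guide the genetic operators, e.g. by discouraging crossover between near-duplicate task assignments.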
arXiv Detail & Related papers (2024-08-29T06:57:45Z) - Satellite Federated Edge Learning: Architecture Design and Convergence Analysis [47.057886812985984]
This paper introduces a novel FEEL algorithm, named FEDMEGA, tailored to mega-constellation networks.
By integrating inter-satellite links (ISL) for intra-orbit model aggregation, the proposed algorithm significantly reduces the usage of low data rate and intermittent GSL.
Our proposed method includes a ring all-reduce based intra-orbit aggregation mechanism, coupled with a network flow-based transmission scheme for global model aggregation.
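A minimal ring all-reduce over scalar updates conveys the intra-orbit aggregation idea; note that a production implementation (and FEDMEGA itself) would chunk model vectors into reduce-scatter and all-gather phases rather than forwarding whole values, so this is an illustrative toy only.

```python
# Toy ring all-reduce over scalar model updates. Each satellite passes its
# current value to its right neighbor for n-1 rounds, accumulating as it goes;
# real implementations chunk vectors (reduce-scatter + all-gather) instead.

def ring_allreduce(values):
    """Each satellite i starts with values[i]; after n-1 steps around the
    ring, every satellite holds the sum of all values."""
    n = len(values)
    acc = list(values)                 # running accumulator at each satellite
    send = list(values)                # value currently being forwarded
    for _ in range(n - 1):
        recv = [send[(i - 1) % n] for i in range(n)]  # receive from the left
        acc = [a + r for a, r in zip(acc, recv)]
        send = recv
    return acc

updates = [1.0, 2.0, 3.0, 4.0]   # hypothetical per-satellite local updates
result = ring_allreduce(updates)
print(result)  # every satellite ends with the sum, 10.0
```

The appeal in an orbit is that the ring matches the physical inter-satellite link topology, so no satellite needs a link to every other.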
arXiv Detail & Related papers (2024-04-02T11:59:58Z) - Asymmetrically Decentralized Federated Learning [22.21977974314497]
Decentralized Federated Learning (DFL) has emerged, which discards the server with a peer-to-peer (P2P) communication framework.
This paper proposes the DFedSGPSM algorithm, which is based on asymmetric topologies and utilizes the Push-Sum protocol.
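For intuition, a push-sum style averaging step on a directed ring can be sketched as below; the fixed half-and-half mixing weights and scalar values are illustrative assumptions, not DFedSGPSM's update rule.

```python
# Toy push-sum averaging on a directed ring. Each node tracks a value mass x
# and a weight mass w; the ratio x/w converges to the network-wide average
# even though the mixing matrix is only column-stochastic (asymmetric links).

def push_sum(values, steps):
    n = len(values)
    x = list(values)          # value mass
    w = [1.0] * n             # weight mass
    for _ in range(steps):
        nx, nw = [0.0] * n, [0.0] * n
        for i in range(n):
            j = (i + 1) % n   # sole out-neighbor on the directed ring
            nx[i] += x[i] / 2; nw[i] += w[i] / 2   # keep half
            nx[j] += x[i] / 2; nw[j] += w[i] / 2   # push half
        x, w = nx, nw
    return [xi / wi for xi, wi in zip(x, w)]  # ratios approach the average

avg = push_sum([1.0, 2.0, 3.0, 4.0], steps=50)
print(avg)  # every entry near 2.5
```

The weight mass w is what lets the ratio stay unbiased on asymmetric (non-doubly-stochastic) topologies, which is the setting this paper targets.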
arXiv Detail & Related papers (2023-10-08T09:46:26Z) - Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem, under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z) - Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z) - Federated learning for LEO constellations via inter-HAP links [0.0]
Low Earth Orbit (LEO) satellite constellations have seen a sharp increase in deployments in recent years.
To apply machine learning (ML) in such constellations, the traditional approach of downloading satellite data such as imagery to a ground station (GS) is undesirable.
We show that existing FL solutions do not fit well in such LEO constellation scenarios because of significant challenges such as excessive convergence delay and unreliable wireless channels.
arXiv Detail & Related papers (2022-05-15T08:22:52Z) - BScNets: Block Simplicial Complex Neural Networks [79.81654213581977]
Simplicial neural networks (SNN) have recently emerged as the newest direction in graph learning.
We present Block Simplicial Complex Neural Networks (BScNets) model for link prediction.
BScNets outperforms state-of-the-art models by a significant margin while maintaining low costs.
arXiv Detail & Related papers (2021-12-13T17:35:54Z)