Distributed Machine Learning Approach for Low-Latency Localization in Cell-Free Massive MIMO Systems
- URL: http://arxiv.org/abs/2507.14216v1
- Date: Wed, 16 Jul 2025 06:05:16 GMT
- Title: Distributed Machine Learning Approach for Low-Latency Localization in Cell-Free Massive MIMO Systems
- Authors: Manish Kumar, Tzu-Hsuan Chou, Byunghyun Lee, Nicolò Michelusi, David J. Love, Yaguang Zhang, James V. Krogmeier
- Abstract summary: Low-latency localization is critical in cellular networks to support real-time applications requiring precise positioning. We propose a distributed machine learning framework for fingerprint-based localization tailored to cell-free massive multiple-input multiple-output (MIMO) systems.
- Score: 16.842941544015194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-latency localization is critical in cellular networks to support real-time applications requiring precise positioning. In this paper, we propose a distributed machine learning (ML) framework for fingerprint-based localization tailored to cell-free massive multiple-input multiple-output (MIMO) systems, an emerging architecture for 6G networks. The proposed framework enables each access point (AP) to independently train a Gaussian process regression model using local angle-of-arrival and received signal strength fingerprints. These models provide probabilistic position estimates for the user equipment (UE), which are then fused by the UE with minimal computational overhead to derive a final location estimate. This decentralized approach eliminates the need for fronthaul communication between the APs and the central processing unit (CPU), thereby reducing latency. Additionally, distributing computational tasks across the APs alleviates the processing burden on the CPU compared to traditional centralized localization schemes. Simulation results demonstrate that the proposed distributed framework achieves localization accuracy comparable to centralized methods, despite lacking the benefits of centralized data aggregation. Moreover, it effectively reduces the uncertainty of the location estimates, as evidenced by the 95% covariance ellipse. The results highlight the potential of distributed ML for enabling low-latency, high-accuracy localization in future 6G networks.
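To make the pipeline concrete, here is a minimal sketch of the per-AP training and UE-side fusion. It assumes scikit-learn's GaussianProcessRegressor, a toy log-distance RSS model (the AoA branch is omitted), and inverse-variance fusion of the per-AP Gaussian estimates; none of these implementation details are taken from the paper itself.

```python
# Minimal sketch of the per-AP GPR + UE-side fusion pipeline (illustrative,
# not the authors' code). Each AP fits a Gaussian process mapping its local
# RSS fingerprint to UE coordinates; the UE fuses the per-AP Gaussian
# estimates by inverse-variance weighting. Data below is synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_train, n_aps = 200, 4
positions = rng.uniform(0, 100, size=(n_train, 2))   # training UE positions (m)
ap_locs = rng.uniform(0, 100, size=(n_aps, 2))       # access-point locations

def rss(ap, pts):
    """Toy log-distance path-loss fingerprint with shadowing (assumed model)."""
    d = np.linalg.norm(pts - ap, axis=1) + 1.0
    return -30.0 - 35.0 * np.log10(d) + rng.normal(0.0, 2.0, len(pts))

# Each AP trains its own GPR locally -- no fronthaul exchange involved.
models = []
for ap in ap_locs:
    X = rss(ap, positions).reshape(-1, 1)
    gpr = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0),
                                   normalize_y=True).fit(X, positions)
    models.append(gpr)

# The UE fuses the per-AP probabilistic estimates (cheap: a weighted mean).
ue_true = np.array([[40.0, 60.0]])
means, variances = [], []
for ap, gpr in zip(ap_locs, models):
    mu, std = gpr.predict(rss(ap, ue_true).reshape(-1, 1), return_std=True)
    means.append(mu[0])
    variances.append(float(np.mean(np.square(std))) + 1e-9)

w = 1.0 / np.array(variances)                        # inverse-variance weights
fused = (w[:, None] * np.array(means)).sum(axis=0) / w.sum()
print("fused estimate:", fused, "true:", ue_true[0])
```

The paper characterizes uncertainty via full 95% covariance ellipses; the scalar-variance weighting above is only the simplest instance of that fusion step.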
Related papers
- Optimal Transport-based Domain Alignment as a Preprocessing Step for Federated Learning [0.48342038441006796]
Federated learning (FL) is a subfield of machine learning that avoids sharing local data with a central server.
In FL, fusing locally-trained models from unbalanced datasets may deteriorate the performance of global model aggregation.
We introduce an Optimal Transport-based preprocessing algorithm that aligns the datasets by minimizing the distributional discrepancy of data across the edge devices.
arXiv Detail & Related papers (2025-06-04T15:35:55Z)
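As a rough illustration of the idea above (not the paper's algorithm: the entropic regularization, uniform marginals, and barycentric mapping below are all assumptions), a local dataset can be aligned to a reference distribution with a plain NumPy Sinkhorn solver:

```python
# Rough NumPy sketch of OT-based alignment (assumed variant, not the paper's
# exact method): compute an entropic transport plan between a skewed local
# dataset and a reference set via Sinkhorn iterations, then map local samples
# through the barycentric projection to reduce distributional discrepancy.
import numpy as np

def sinkhorn_plan(xs, xt, reg=0.05, n_iter=200):
    C = ((xs[:, None, :] - xt[None, :, :]) ** 2).sum(-1)  # squared-distance cost
    C = C / C.max()                                       # scale for stability
    K = np.exp(-C / reg)
    a = np.full(len(xs), 1.0 / len(xs))                   # uniform marginals
    b = np.full(len(xt), 1.0 / len(xt))
    u = np.ones_like(a)
    for _ in range(n_iter):                               # Sinkhorn scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]                    # transport plan G

rng = np.random.default_rng(1)
xs = rng.normal(0, 1, (64, 2)) + 3.0    # shifted local data on an edge device
xt = rng.normal(0, 1, (64, 2))          # reference distribution
G = sinkhorn_plan(xs, xt)
xs_aligned = (G @ xt) / G.sum(axis=1, keepdims=True)  # barycentric mapping
```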
- Decentralised Resource Sharing in TinyML: Wireless Bilayer Gossip Parallel SGD for Collaborative Learning [2.6913398550088483]
This paper proposes a novel framework, bilayer Gossip Decentralised Parallel Descent (GDD).
GDD addresses intermittent connectivity, limited communication range, and dynamic network topologies.
We evaluate the framework's performance against the Centralised Federated Learning (CFL) baseline.
arXiv Detail & Related papers (2025-01-08T20:14:07Z)
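A hedged sketch of one bilayer gossip round, with the cluster structure and mixing rule assumed from the abstract rather than taken from the paper:

```python
# Hedged sketch of a bilayer gossip round (structure assumed): devices first
# average model parameters within their cluster, then cluster heads gossip
# pairwise with neighbouring cluster heads.
import numpy as np

def gossip_round(params, clusters, head_edges):
    # Layer 1: in-cluster averaging after each device's local SGD steps.
    for members in clusters:
        avg = np.mean([params[i] for i in members], axis=0)
        for i in members:
            params[i] = avg.copy()
    # Layer 2: pairwise gossip between cluster heads (first member of each).
    for i, j in head_edges:
        mixed = 0.5 * (params[clusters[i][0]] + params[clusters[j][0]])
        params[clusters[i][0]] = mixed.copy()
        params[clusters[j][0]] = mixed.copy()
    return params

params = [np.random.randn(10) for _ in range(6)]   # 6 devices, toy models
clusters = [[0, 1, 2], [3, 4, 5]]                  # two device clusters
params = gossip_round(params, clusters, head_edges=[(0, 1)])
```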
- Asynchronous Parallel Incremental Block-Coordinate Descent for Decentralized Machine Learning [55.198301429316125]
Machine learning (ML) is a key technique for big-data-driven modelling and analysis of massive Internet of Things (IoT)-based intelligent and ubiquitous computing.
For fast-increasing applications and data amounts, distributed learning is a promising emerging paradigm since it is often impractical or inefficient to share/aggregate data.
This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices.
arXiv Detail & Related papers (2022-02-07T15:04:15Z)
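For intuition, a minimal synchronous block-coordinate descent loop on a least-squares problem is sketched below; the paper's asynchronous, incremental variant is more involved, and the device-to-block mapping here is purely illustrative:

```python
# Minimal block-coordinate descent sketch for least squares: each device owns
# one coordinate block and exactly minimizes over it while the other blocks
# stay fixed. The asynchrony of the paper's scheme is not modelled.
import numpy as np

rng = np.random.default_rng(2)
A, x_true = rng.normal(size=(100, 20)), rng.normal(size=20)
y = A @ x_true
blocks = np.array_split(np.arange(20), 4)   # one block per device
x = np.zeros(20)
for epoch in range(50):
    for blk in blocks:                      # devices update disjoint blocks
        r = y - A @ x                       # residual with other blocks fixed
        x[blk] += np.linalg.lstsq(A[:, blk], r, rcond=None)[0]
print("error:", np.linalg.norm(x - x_true))
```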
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
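A minimal sketch of the two-level aggregation pattern such hierarchical FL schemes build on (the fixed user-to-edge assignment and sample-count weighting below are assumptions; the paper optimizes the assignment and resource allocation):

```python
# Two-level (hierarchical) FedAvg sketch: devices send updates to their
# assigned edge node, edge nodes aggregate locally, and only the edge-level
# averages reach the cloud, cutting cloud-bound communication.
import numpy as np

def fedavg(models, weights):
    return np.average(models, axis=0, weights=np.asarray(weights, float))

device_models = [np.random.randn(10) for _ in range(8)]
n_samples = [50, 120, 30, 80, 200, 60, 90, 40]      # imbalanced local data
edge_assignment = [[0, 1, 2], [3, 4], [5, 6, 7]]    # 3 edge nodes (assumed)

edge_models, edge_weights = [], []
for members in edge_assignment:
    edge_models.append(fedavg([device_models[i] for i in members],
                              [n_samples[i] for i in members]))
    edge_weights.append(sum(n_samples[i] for i in members))

global_model = fedavg(edge_models, edge_weights)    # cloud-level aggregation
```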
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
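In the usual GTV formulation (notation assumed here, not quoted from the paper), each node i of the empirical graph holds a local model and loss, and the objective couples neighbours through weighted edges:

```latex
\min_{\{\mathbf{w}_i\}} \; \sum_{i \in \mathcal{V}} L_i(\mathbf{w}_i)
\;+\; \lambda \sum_{(i,j) \in \mathcal{E}} A_{ij}\, \lVert \mathbf{w}_i - \mathbf{w}_j \rVert_2
```

Small \lambda keeps models personalized; large \lambda drives connected nodes toward cluster-wise consensus, which is what makes the formulation "clustered".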
- Auction Based Clustered Federated Learning in Mobile Edge Computing System [13.710325615076687]
Federated learning is a distributed machine learning solution that uses local computing and local data to train Artificial Intelligence (AI) models.
We propose a cluster-based client selection method that can generate a federated virtual dataset matching the global distribution.
We show that our proposed selection methods and auction-based federated learning can achieve better performance with a Convolutional Neural Network (CNN) model under different data distributions.
arXiv Detail & Related papers (2021-03-12T08:54:27Z)
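A hedged sketch of cluster-based client selection: clients are grouped by their label histograms and one client is drawn per cluster, so the selected set approximates the global label distribution. The k-means grouping is a stand-in assumption, and the auction mechanism itself is not modelled here:

```python
# Cluster clients by label histograms (plain k-means, assumed stand-in for
# the paper's grouping), then select one client per cluster so the union of
# selected datasets resembles the global distribution.
import numpy as np

rng = np.random.default_rng(3)
n_clients, n_classes, k = 20, 10, 4
hists = rng.dirichlet(np.ones(n_classes) * 0.3, size=n_clients)  # non-IID labels

centers = hists[rng.choice(n_clients, k, replace=False)]
for _ in range(20):   # Lloyd iterations on the histograms
    labels = np.argmin(((hists[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([hists[labels == c].mean(axis=0)
                        if np.any(labels == c) else centers[c]
                        for c in range(k)])

selected = [int(np.where(labels == c)[0][0])
            for c in range(k) if np.any(labels == c)]
print("selected clients:", selected)
```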
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
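For reference, the standard global-consensus ADMM iterations that such mini-batch variants build on take the following textbook form (the paper's coded stochastic version modifies these updates; this is not its exact algorithm):

```latex
\mathbf{x}_i^{k+1} = \arg\min_{\mathbf{x}} \Big( f_i(\mathbf{x})
   + \langle \boldsymbol{\lambda}_i^{k}, \mathbf{x} - \mathbf{z}^{k} \rangle
   + \tfrac{\rho}{2} \lVert \mathbf{x} - \mathbf{z}^{k} \rVert^2 \Big), \\
\mathbf{z}^{k+1} = \frac{1}{N} \sum_{i=1}^{N} \Big( \mathbf{x}_i^{k+1}
   + \tfrac{1}{\rho} \boldsymbol{\lambda}_i^{k} \Big), \qquad
\boldsymbol{\lambda}_i^{k+1} = \boldsymbol{\lambda}_i^{k}
   + \rho \big( \mathbf{x}_i^{k+1} - \mathbf{z}^{k+1} \big).
```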
- Zero-Shot Multi-View Indoor Localization via Graph Location Networks [66.05980368549928]
Indoor localization is a fundamental problem in location-based applications.
We propose a novel neural network-based architecture, Graph Location Networks (GLN), to perform infrastructure-free, multi-view image-based indoor localization.
GLN makes location predictions based on robust location representations extracted from images through message-passing networks.
We introduce a novel zero-shot indoor localization setting and tackle it by extending the proposed GLN to a dedicated zero-shot version.
arXiv Detail & Related papers (2020-08-06T07:36:55Z)
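A generic message-passing update of the kind GLN builds on (this is a textbook GNN step, not the exact GLN architecture; the features, adjacency, and weights below are synthetic):

```python
# One message-passing round: each view/node aggregates the mean of its
# neighbours' features and mixes it with its own embedding. A location
# predictor would read out from the refined node representations.
import numpy as np

def message_passing(H, adj, W_self, W_nbr):
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    msgs = adj @ H / deg                      # mean over neighbours
    return np.tanh(H @ W_self + msgs @ W_nbr)

rng = np.random.default_rng(4)
n_nodes, d = 5, 16
H = rng.normal(size=(n_nodes, d))             # per-view image features (given)
adj = (rng.random((n_nodes, n_nodes)) > 0.5).astype(float)
np.fill_diagonal(adj, 0)
for _ in range(2):                            # two rounds of message passing
    H = message_passing(H, adj,
                        rng.normal(size=(d, d)) * 0.1,
                        rng.normal(size=(d, d)) * 0.1)
```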
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)
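A toy end-to-end compressive-sensing round, with a random Gaussian measurement matrix standing in for the massive-MIMO uplink and iterative hard thresholding as an assumed reconstruction method (the paper's MIMO-specific design is not reproduced):

```python
# A sparse model update is compressed to m << n measurements and recovered
# at the server by iterative hard thresholding (IHT).
import numpy as np

rng = np.random.default_rng(5)
n, m, s = 200, 60, 5                       # dimension, measurements, sparsity
g = np.zeros(n)
g[rng.choice(n, s, replace=False)] = rng.normal(size=s)   # sparse update
A = rng.normal(size=(m, n)) / np.sqrt(m)                  # measurement matrix
y = A @ g                                                 # compressed uplink

x = np.zeros(n)
for _ in range(100):                       # IHT: gradient step + hard threshold
    x = x + A.T @ (y - A @ x)
    keep = np.argsort(np.abs(x))[-s:]      # keep the s largest entries
    mask = np.zeros(n, dtype=bool)
    mask[keep] = True
    x[~mask] = 0.0
print("recovery error:", np.linalg.norm(x - g))
```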
- Federated Learning with Cooperating Devices: A Consensus Approach for Massive IoT Networks [8.456633924613456]
Federated learning (FL) is emerging as a new paradigm to train machine learning models in distributed systems.
The paper proposes a fully distributed (or server-less) learning approach: the proposed FL algorithms leverage the cooperation of devices that perform data operations inside the network.
The approach lays the groundwork for integration of FL within 5G and beyond networks characterized by decentralized connectivity and computing.
arXiv Detail & Related papers (2019-12-27T15:16:04Z)
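A minimal server-less update of the kind such consensus-based schemes rely on (a generic decentralized gradient step, assumed here rather than taken from the paper): each device mixes its neighbours' models, then applies a local gradient step,

```latex
\mathbf{w}_i^{t+1} = \sum_{j \in \mathcal{N}_i \cup \{i\}} a_{ij}\, \mathbf{w}_j^{t}
\;-\; \eta \, \nabla L_i\big(\mathbf{w}_i^{t}\big),
```

where the mixing weights a_{ij} are nonnegative and sum to one for each device, so repeated rounds drive the network toward a common model without any central server.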