Radio Foundation Models: Pre-training Transformers for 5G-based Indoor Localization
- URL: http://arxiv.org/abs/2410.00617v1
- Date: Tue, 1 Oct 2024 12:03:32 GMT
- Title: Radio Foundation Models: Pre-training Transformers for 5G-based Indoor Localization
- Authors: Jonathan Ott, Jonas Pirkl, Maximilian Stahlke, Tobias Feigl, Christopher Mutschler
- Abstract summary: We propose a self-supervised learning framework that pre-trains a general transformer (TF) neural network on 5G channel measurements without expensive equipment.
Our novel pretext task randomly masks and drops input information to learn to reconstruct it.
It implicitly learns spatiotemporal patterns and information of the propagation environment that enable FP-based localization.
- Score: 3.2805385616712677
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI)-based radio fingerprinting (FP) outperforms classic localization methods in propagation environments with strong multipath effects. However, the model and data orchestration of FP are time-consuming and costly, as it requires many reference positions and extensive measurement campaigns for each environment. Instead, modern unsupervised and self-supervised learning schemes require less reference data for localization, but either their accuracy is low or they require additional sensor information, rendering them impractical. In this paper we propose a self-supervised learning framework that pre-trains a general transformer (TF) neural network on 5G channel measurements that we collect on-the-fly without expensive equipment. Our novel pretext task randomly masks and drops input information to learn to reconstruct it. So, it implicitly learns the spatiotemporal patterns and information of the propagation environment that enable FP-based localization. Most interestingly, when we optimize this pre-trained model for localization in a given environment, it achieves the accuracy of state-of-the-art methods but requires ten times less reference data and significantly reduces the time from training to operation.
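The abstract describes the pretext task only in prose; as a rough illustration, a masked-reconstruction pre-training step of this kind might be sketched in PyTorch as below, where every shape and hyperparameter is an assumption rather than a value from the paper:
```python
import torch
import torch.nn as nn

# Hypothetical sketch of the masking-and-dropping pretext task described in the
# abstract: channel snapshots are randomly replaced by a learned mask token and
# a transformer encoder is trained to reconstruct them.

class MaskedChannelPretraining(nn.Module):
    def __init__(self, n_subcarriers=64, d_model=128, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Linear(n_subcarriers, d_model)        # per-snapshot embedding
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learned mask embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_subcarriers)         # reconstruction head

    def forward(self, x, mask_ratio=0.3):
        # x: (batch, time, n_subcarriers) sequence of channel measurements
        tokens = self.embed(x)
        mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
        recon = self.head(self.encoder(tokens))
        # reconstruction loss only on the masked positions (BERT-style)
        return ((recon - x) ** 2)[mask].mean()

model = MaskedChannelPretraining()
loss = model(torch.randn(8, 32, 64))  # 8 sequences of 32 snapshots
loss.backward()
```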
Related papers
- Finetuning Pre-trained Model with Limited Data for LiDAR-based 3D Object Detection by Bridging Domain Gaps [8.897884780881535]
LiDAR-based 3D object detectors often fail to adapt well to target domains with different sensor configurations.
Recent studies suggest that pre-trained backbones can be learned in a self-supervised manner with large-scale unlabeled LiDAR frames.
We propose a novel method, called Domain Adaptive Distill-Tuning (DADT), to adapt a pre-trained model with limited target data.
arXiv Detail & Related papers (2024-10-02T08:22:42Z)
- A Variational Auto-Encoder Enabled Multi-Band Channel Prediction Scheme for Indoor Localization [11.222977249913411]
We provide a scheme to improve the accuracy of indoor fingerprint localization from the frequency domain.
We tested our proposed scheme on COST 2100 simulation data and real orthogonal frequency-division multiplexing (OFDM) WiFi data collected from an office scenario.
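As a loose illustration of the VAE component (architecture and dimensions assumed, not taken from the paper):
```python
import torch
import torch.nn as nn

# Illustrative variational auto-encoder over frequency-domain channel vectors,
# in the spirit of the scheme summarized above.

class ChannelVAE(nn.Module):
    def __init__(self, n_freq=128, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_freq, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent)
        self.logvar = nn.Linear(64, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, n_freq))

    def forward(self, h):                      # h: (batch, n_freq) CSI features
        e = self.enc(h)
        mu, logvar = self.mu(e), self.logvar(e)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        recon = self.dec(z)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return ((recon - h) ** 2).sum(-1).mean() + kl

vae = ChannelVAE()
loss = vae(torch.randn(32, 128))
loss.backward()
```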
arXiv Detail & Related papers (2023-09-19T08:19:34Z)
- Federated Learning for 5G Base Station Traffic Forecasting [0.0]
We investigate the efficacy of distributed learning applied to raw base station LTE data for time-series forecasting.
Our results show that the learning architectures adapted to the federated setting yield equivalent prediction error to the centralized setting.
In addition, preprocessing techniques on base stations enhance forecasting accuracy, while advanced federated aggregators do not surpass simpler approaches.
arXiv Detail & Related papers (2022-11-28T11:03:29Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
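FedReg's exact mechanism is detailed in the paper; as a generic stand-in, a FedProx-style proximal penalty that discourages local drift from the global model illustrates the broader idea of alleviating forgetting during local training:
```python
import copy
import torch

# Generic stand-in for forgetting-alleviated local training: a proximal penalty
# keeps local weights close to the global model. FedReg's actual mechanism
# differs; this only illustrates the broader idea.

def local_update(model, global_model, loader, loss_fn, mu=0.01, lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    global_params = [p.detach().clone() for p in global_model.parameters()]
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        for p, g in zip(model.parameters(), global_params):   # proximal term
            loss = loss + (mu / 2) * (p - g).pow(2).sum()
        loss.backward()
        opt.step()
    return model

# toy usage with one dummy batch
net = torch.nn.Linear(10, 2)
loader = [(torch.randn(4, 10), torch.randint(0, 2, (4,)))]
local_update(net, copy.deepcopy(net), loader, torch.nn.functional.cross_entropy)
```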
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- LocUNet: Fast Urban Positioning Using Radio Maps and Deep Learning [59.17191114000146]
LocUNet is a deep learning method for localization based merely on Received Signal Strength (RSS) from Base Stations (BSs).
In the proposed method, the user to be localized reports the RSS from BSs to a Central Processing Unit (CPU), which may be located in the cloud.
Using estimated pathloss radio maps of the BSs, LocUNet can localize users with state-of-the-art accuracy and enjoys high robustness to inaccuracies in the radio maps.
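A stripped-down toy version of this idea (the real LocUNet is a UNet operating on map-shaped inputs and outputs; this small CNN is only illustrative):
```python
import torch
import torch.nn as nn

# Toy sketch: stack the estimated pathloss radio maps of the BSs as image
# channels, weight them by the reported RSS, and regress the user position.

class ToyRadioMapLocalizer(nn.Module):
    def __init__(self, n_bs=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_bs, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),                        # (x, y) position estimate
        )

    def forward(self, radio_maps, rss):
        # radio_maps: (batch, n_bs, H, W); rss: (batch, n_bs) reported by user
        x = radio_maps * rss[:, :, None, None]       # weight each map by its RSS
        return self.net(x)

model = ToyRadioMapLocalizer()
pos = model(torch.randn(4, 5, 64, 64), torch.randn(4, 5))   # -> (4, 2)
```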
arXiv Detail & Related papers (2022-02-01T20:27:46Z)
- Transfer learning to improve streamflow forecasts in data sparse regions [0.0]
We study the methodology behind Transfer Learning (TL) through fine-tuning and parameter transferring for better generalization performance of streamflow prediction in data-sparse regions.
We fit a standard recurrent neural network, in the form of a Long Short-Term Memory (LSTM) model, on a sufficiently large source-domain dataset.
We present a methodology to implement transfer learning approaches for hydrologic applications by separating the spatial and temporal components of the model and training the model to generalize.
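A minimal sketch of this pretrain-then-fine-tune recipe, with the layer sizes and freezing strategy assumed rather than taken from the paper:
```python
import torch
import torch.nn as nn

# Sketch: pretrain an LSTM on a large source domain, then fine-tune on sparse
# target-domain streamflow data, freezing the temporal (recurrent) component.

class StreamflowLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # one-step-ahead streamflow

model = StreamflowLSTM()
# ... pretrain on the source-domain dataset here ...

# fine-tuning: freeze the recurrent part, retrain only the head
for p in model.lstm.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
```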
arXiv Detail & Related papers (2021-12-06T14:52:53Z)
- LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
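A rough sketch of evaluating one point of such a subspace, interpolating two weight tensors and pruning at an inference-time sparsity (jointly training the endpoints, the paper's actual contribution, is omitted):
```python
import torch

# Interpolate two trained weight tensors, then magnitude-prune at a sparsity
# chosen at inference time. Purely illustrative of the subspace idea.

def subspace_weights(w0, w1, alpha, sparsity):
    w = (1 - alpha) * w0 + alpha * w1              # point on the line segment
    k = int(sparsity * w.numel())                  # number of weights to zero
    if k > 0:
        threshold = w.abs().flatten().kthvalue(k).values
        w = torch.where(w.abs() > threshold, w, torch.zeros_like(w))
    return w

w0, w1 = torch.randn(1000), torch.randn(1000)
sparse_w = subspace_weights(w0, w1, alpha=0.25, sparsity=0.9)  # ~90% zeros
```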
arXiv Detail & Related papers (2021-10-08T17:03:34Z)
- Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures a certain "fairness" across different data samples.
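One simple way to instantiate such a min-max objective (the paper's exact formulation may differ) is to up-weight the worst-off samples:
```python
import torch

# Softmax weighting of per-sample losses approaches the hard worst-case loss
# as tau -> 0, enforcing a form of fairness across samples.

def minmax_loss(per_sample_losses, tau=0.1):
    w = torch.softmax(per_sample_losses.detach() / tau, dim=0)
    return (w * per_sample_losses).sum()

losses = torch.tensor([0.2, 1.5, 0.4], requires_grad=True)
minmax_loss(losses).backward()   # gradient concentrates on the worst sample
```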
arXiv Detail & Related papers (2020-11-16T08:24:34Z)
- Wireless Localisation in WiFi using Novel Deep Architectures [4.541069830146568]
This paper studies the indoor localisation of WiFi devices based on a commodity chipset and standard channel sounding.
We present a novel shallow neural network (SNN) in which features are extracted from the channel state information corresponding to WiFi subcarriers received on different antennas.
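A toy version of a shallow network over per-antenna, per-subcarrier CSI features (the feature extraction and dimensions are assumptions):
```python
import torch
import torch.nn as nn

# Toy shallow network (SNN) over per-antenna, per-subcarrier CSI magnitudes.

n_antennas, n_subcarriers = 4, 56
snn = nn.Sequential(
    nn.Flatten(),                                     # (batch, 4 * 56)
    nn.Linear(n_antennas * n_subcarriers, 128), nn.ReLU(),
    nn.Linear(128, 2),                                # (x, y) position estimate
)
pos = snn(torch.randn(16, n_antennas, n_subcarriers))
```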
arXiv Detail & Related papers (2020-10-16T22:48:29Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
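FOLB's sampling and aggregation rules are derived in the paper; a crude stand-in, sampling clients in proportion to their last gradient norm, conveys the flavor of "intelligent sampling":
```python
import torch

# Select devices with probability proportional to their last gradient norm.
# FOLB's actual rules are more principled; this is only an illustration.

def sample_clients(grad_norms, n_select):
    probs = torch.tensor(grad_norms) / sum(grad_norms)
    return torch.multinomial(probs, n_select, replacement=False).tolist()

chosen = sample_clients([0.5, 2.0, 1.2, 0.1, 3.3], n_select=2)
```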
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Understanding Self-Training for Gradual Domain Adaptation [107.37869221297687]
We consider gradual domain adaptation, where the goal is to adapt an initial classifier trained on a source domain given only unlabeled data that shifts gradually in distribution towards a target domain.
We prove the first non-vacuous upper bound on the error of self-training with gradual shifts, under settings where directly adapting to the target domain can result in unbounded error.
The theoretical analysis leads to algorithmic insights, highlighting that regularization and label sharpening are essential even when we have infinite data, and suggesting that self-training works particularly well for shifts with small Wasserstein-infinity distance.
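The gradual self-training loop the analysis studies is simple enough to sketch on synthetic data (hyperparameters and data are assumptions):
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Gradual self-training: pseudo-label each intermediate domain with the current
# classifier (hard labels = label sharpening) and refit with regularization.

def gradual_self_train(clf, domains):
    for X in domains:                                  # ordered source -> target
        pseudo = clf.predict(X)                        # hard pseudo-labels
        clf = LogisticRegression(C=0.1).fit(X, pseudo) # regularized refit
    return clf

rng = np.random.default_rng(0)
X0 = rng.normal(size=(200, 2))
y0 = (X0[:, 0] > 0).astype(int)
clf = LogisticRegression(C=0.1).fit(X0, y0)
domains = [X0 + 0.2 * t for t in range(1, 6)]          # small steps of shift
clf = gradual_self_train(clf, domains)
```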
arXiv Detail & Related papers (2020-02-26T08:59:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.