Adaptive Attention-Based Model for 5G Radio-based Outdoor Localization
- URL: http://arxiv.org/abs/2503.23810v1
- Date: Mon, 31 Mar 2025 07:44:14 GMT
- Title: Adaptive Attention-Based Model for 5G Radio-based Outdoor Localization
- Authors: Ilayda Yaman, Guoda Tian, Fredrik Tufvesson, Ove Edfors, Zhengya Zhang, Liang Liu
- Abstract summary: We develop an adaptive localization framework that combines shallow attention-based models with a router/switching mechanism based on a single-layer perceptron (SLP). This enables seamless transitions between specialized localization models optimized for different conditions, balancing accuracy, computational efficiency, and robustness to environmental variations.
- Score: 10.271380902604212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radio-based localization in dynamic environments, such as urban and vehicular settings, requires systems that can efficiently adapt to varying signal conditions and environmental changes. Factors such as multipath interference and obstructions introduce different levels of complexity that affect the accuracy of the localization. Although generalized models offer broad applicability, they often struggle to capture the nuances of specific environments, leading to suboptimal performance in real-world deployments. In contrast, specialized models can be tailored to particular conditions, enabling more precise localization by effectively handling domain-specific variations and noise patterns. However, deploying multiple specialized models requires an efficient mechanism to select the most appropriate one for a given scenario. In this work, we develop an adaptive localization framework that combines shallow attention-based models with a router/switching mechanism based on a single-layer perceptron (SLP). This enables seamless transitions between specialized localization models optimized for different conditions, balancing accuracy, computational efficiency, and robustness to environmental variations. We design three low-complexity localization models tailored for distinct scenarios, optimized for reduced computational complexity, test time, and model size. The router dynamically selects the most suitable model based on real-time input characteristics. The proposed framework is validated using real-world vehicle localization data collected from a massive MIMO base station (BS), demonstrating its ability to seamlessly adapt to diverse deployment conditions while maintaining high localization accuracy.
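The abstract describes the architecture only at a high level. The sketch below is a minimal, hypothetical PyTorch rendering of the idea, assuming illustrative input dimensions (channel snapshots of shape antennas x subcarriers), a single-head attention layer per specialized model, and a per-antenna power profile as the router's input feature; none of these class names, dimensions, or feature choices come from the paper.

```python
# Minimal sketch of the adaptive localization framework described in the abstract:
# three shallow attention-based regressors and a single-layer perceptron (SLP)
# router that picks one model per input. All dimensions and names are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


class ShallowAttentionLocalizer(nn.Module):
    """One specialized model: a single self-attention layer followed by a
    small MLP that regresses a 2-D position from a channel snapshot."""

    def __init__(self, n_antennas=64, n_subcarriers=100, d_model=32):
        super().__init__()
        self.embed = nn.Linear(n_subcarriers, d_model)  # per-antenna embedding
        self.attn = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
        self.head = nn.Sequential(nn.Linear(d_model, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, csi):                      # csi: (B, antennas, subcarriers)
        x = self.embed(csi)                      # (B, antennas, d_model)
        x, _ = self.attn(x, x, x)                # shallow self-attention
        return self.head(x.mean(dim=1))          # (B, 2) position estimate


class SLPRouter(nn.Module):
    """Single-layer perceptron that scores which specialized model to use."""

    def __init__(self, n_features, n_models=3):
        super().__init__()
        self.fc = nn.Linear(n_features, n_models)

    def forward(self, features):                 # features: (B, n_features)
        return self.fc(features)                 # unnormalized model scores


class AdaptiveLocalizer(nn.Module):
    """Routes each input to the specialized model with the highest score."""

    def __init__(self, n_antennas=64, n_subcarriers=100, n_models=3):
        super().__init__()
        self.models = nn.ModuleList(
            [ShallowAttentionLocalizer(n_antennas, n_subcarriers) for _ in range(n_models)]
        )
        # The router sees only a cheap summary of the input (here: per-antenna power).
        self.router = SLPRouter(n_features=n_antennas, n_models=n_models)

    def forward(self, csi):
        features = csi.pow(2).mean(dim=-1)                        # (B, antennas) power profile
        choice = self.router(features).argmax(dim=-1)             # (B,) selected model index
        out = torch.stack([m(csi) for m in self.models], dim=1)   # (B, n_models, 2)
        return out[torch.arange(csi.size(0)), choice]             # one estimate per sample


if __name__ == "__main__":
    model = AdaptiveLocalizer()
    positions = model(torch.randn(4, 64, 100))   # 4 dummy channel snapshots
    print(positions.shape)                       # torch.Size([4, 2])
```

For brevity the sketch evaluates all three models and then selects one; in an actual deployment only the model chosen by the router would be run, which is where the claimed savings in computation, test time, and model size would come from.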
Related papers
- GeneralizeFormer: Layer-Adaptive Model Generation across Test-Time Distribution Shifts [58.95913531746308]
We consider the problem of test-time domain generalization, where a model is trained on several source domains and adjusted on target domains never seen during training. We propose to generate multiple layer parameters on the fly during inference by a lightweight meta-learned transformer, which we call GeneralizeFormer.
arXiv Detail & Related papers (2025-02-15T10:10:49Z)
- Few-shot Steerable Alignment: Adapting Rewards and LLM Policies with Neural Processes [50.544186914115045]
Large language models (LLMs) are increasingly embedded in everyday applications. Ensuring their alignment with the diverse preferences of individual users has become a critical challenge. We present a novel framework for few-shot steerable alignment.
arXiv Detail & Related papers (2024-12-18T16:14:59Z)
- DECODE: Domain-aware Continual Domain Expansion for Motion Prediction [10.479509360064219]
We introduce DECODE, a novel continual learning framework for motion prediction.
It balances specialization with generalization, dynamically adjusting to real-time demands.
It achieves a notably low forgetting rate of 0.044 and an average minADE of 0.584 m.
arXiv Detail & Related papers (2024-11-26T22:20:22Z)
- SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation On Diverse Modalities [55.87169702896249]
Unsupervised Domain Adaptation (DA) consists of adapting a model trained on a labeled source domain to perform well on an unlabeled target domain with some data distribution shift. We present a complete and fair evaluation of existing shallow algorithms, including reweighting, mapping, and subspace alignment. Our benchmark highlights the importance of realistic validation and provides practical guidance for real-life applications.
arXiv Detail & Related papers (2024-07-16T12:52:29Z)
- Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference [47.460898983429374]
We introduce an ensemble Kalman filter (EnKF) into the non-mean-field (NMF) variational inference framework to approximate the posterior distribution of the latent states.
This novel marriage between EnKF and GPSSM not only eliminates the need for extensive parameterization in learning variational distributions, but also enables an interpretable, closed-form approximation of the evidence lower bound (ELBO).
We demonstrate that the resulting EnKF-aided online algorithm embodies a principled objective function by ensuring data-fitting accuracy while incorporating model regularizations to mitigate overfitting.
arXiv Detail & Related papers (2023-12-10T15:22:30Z)
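The entry above names the EnKF-based construction without giving its equations. For orientation, here is the textbook perturbed-observation EnKF analysis step that such methods build on, written as a generic NumPy sketch with toy dimensions; the GPSSM and variational-inference machinery of the paper is omitted, and all names are illustrative.

```python
# Standard ensemble Kalman filter (EnKF) analysis step, the generic building
# block referenced above. Toy dimensions; not the authors' implementation.
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """ensemble: (N, d) state ensemble; y: (m,) observation;
    H: (m, d) observation matrix; R: (m, m) observation noise covariance."""
    N, d = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)              # state anomalies, (N, d)
    Y = X @ H.T                                       # predicted-observation anomalies, (N, m)
    # Sample covariances estimated from the ensemble.
    P_yy = Y.T @ Y / (N - 1) + R                      # innovation covariance, (m, m)
    P_xy = X.T @ Y / (N - 1)                          # cross covariance, (d, m)
    K = np.linalg.solve(P_yy, P_xy.T).T               # Kalman gain, (d, m)
    # Perturbed-observation update: each member assimilates a noisy copy of y.
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T # analysis ensemble, (N, d)

rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 4))                        # 50 members, 4-D latent state
H = np.eye(2, 4)                                      # observe the first two state dims
updated = enkf_update(ens, y=np.array([0.5, -0.2]), H=H, R=0.1 * np.eye(2), rng=rng)
print(updated.shape)                                  # (50, 4)
```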
- Locally Adaptive and Differentiable Regression [10.194448186897906]
We propose a general framework to construct a global continuous and differentiable model based on a weighted average of locally learned models in corresponding local regions.
We demonstrate that when we mix kernel ridge and regression terms in the local models, and stitch them together continuously, we achieve faster statistical convergence in theory and improved performance in various practical settings.
arXiv Detail & Related papers (2023-08-14T19:12:40Z)
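As a hedged illustration of the general recipe this entry describes (local models blended into a continuous, differentiable global predictor), the toy NumPy sketch below fits ridge-regularized local linear models around a few centers and stitches them with normalized RBF weights. The choice of weights, penalty, and 1-D data are assumptions for the demo, not details from the paper.

```python
# Generic sketch: fit simple local models on overlapping regions and blend
# their predictions with smooth, normalized weights so the global predictor
# stays continuous and differentiable. Illustrative only.
import numpy as np

def fit_local_ridge(x, y, center, bandwidth, lam=1e-2):
    """Weighted ridge fit of a local linear model around one center."""
    w = np.exp(-((x - center) ** 2) / (2 * bandwidth ** 2))   # locality weights
    X = np.stack([np.ones_like(x), x - center], axis=1)       # local design matrix
    A = X.T @ (w[:, None] * X) + lam * np.eye(2)
    return np.linalg.solve(A, X.T @ (w * y))                  # (intercept, slope)

def predict_global(x, centers, coefs, bandwidth):
    """Smoothly stitch the local models with normalized RBF weights."""
    W = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * bandwidth ** 2))
    W = W / W.sum(axis=1, keepdims=True)                      # partition of unity
    local = np.stack([c0 + c1 * (x - c) for (c0, c1), c in zip(coefs, centers)], axis=1)
    return (W * local).sum(axis=1)                            # weighted average of local fits

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 200))
y = np.sin(2 * x) + 0.1 * rng.normal(size=x.size)
centers = np.linspace(-3, 3, 7)
coefs = [fit_local_ridge(x, y, c, bandwidth=0.8) for c in centers]
print(np.abs(predict_global(x, centers, coefs, 0.8) - y).mean())  # mean absolute error
```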
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Adaptive Spot-Guided Transformer for Consistent Local Feature Matching [64.30749838423922]
We propose Adaptive Spot-Guided Transformer (ASTR) for local feature matching.
ASTR models the local consistency and scale variations in a unified coarse-to-fine architecture.
arXiv Detail & Related papers (2023-03-29T12:28:01Z)
- Task-Adaptive Meta-Learning Framework for Advancing Spatial Generalizability [5.448577641039577]
We propose a model-agnostic meta-learning framework that ensembles regionally heterogeneous data into location-sensitive meta tasks.
One major advantage of our proposed method is that it improves the model adaptation to a large number of heterogeneous tasks.
It also enhances the model generalization by automatically adapting the meta model of the corresponding difficulty level to any new tasks.
arXiv Detail & Related papers (2022-12-10T01:50:35Z)
- Change Detection for Local Explainability in Evolving Data Streams [72.4816340552763]
Local feature attribution methods have become a popular technique for post-hoc and model-agnostic explanations.
It is often unclear how local attributions behave in realistic, constantly evolving settings such as streaming and online applications.
We present CDLEEDS, a flexible and model-agnostic framework for detecting local change and concept drift.
arXiv Detail & Related papers (2022-09-06T18:38:34Z)
- PointFix: Learning to Fix Domain Bias for Robust Online Stereo Adaptation [67.41325356479229]
We propose to incorporate an auxiliary point-selective network into a meta-learning framework, called PointFix.
In a nutshell, our auxiliary network learns to fix local variants intensively by effectively back-propagating local information through the meta-gradient.
This network is model-agnostic, so it can be used with any kind of architecture in a plug-and-play manner.
arXiv Detail & Related papers (2022-07-27T07:48:29Z)
- Slimmable Domain Adaptation [112.19652651687402]
We introduce a simple framework, Slimmable Domain Adaptation, to improve cross-domain generalization with a weight-sharing model bank.
Our framework surpasses other competing approaches by a very large margin on multiple benchmarks.
arXiv Detail & Related papers (2022-06-14T06:28:04Z)
- Local-Adaptive Face Recognition via Graph-based Meta-Clustering and Regularized Adaptation [21.08555249703121]
We introduce a new problem setup called Local-Adaptive Face Recognition (LaFR).
LaFR aims at achieving optimal performance by training locally adapted models automatically and in an unsupervised manner.
We show that LaFR can further improve the global model by a simple federated aggregation over the updated local models.
arXiv Detail & Related papers (2022-03-27T15:20:14Z)
- Data Summarization via Bilevel Optimization [48.89977988203108]
A simple yet powerful approach is to operate on small subsets of data.
In this work, we propose a generic coreset framework that formulates the coreset selection as a cardinality-constrained bilevel optimization problem.
arXiv Detail & Related papers (2021-09-26T09:08:38Z)
- Deep Variational Models for Collaborative Filtering-based Recommender Systems [63.995130144110156]
Deep learning provides accurate collaborative filtering models to improve recommender system results.
Our proposed models apply the variational concept to inject stochasticity into the latent space of the deep architecture.
Results show the superiority of the proposed approach in scenarios where the variational enrichment exceeds the injected noise effect.
arXiv Detail & Related papers (2021-07-27T08:59:39Z)
- Robust Federated Learning Through Representation Matching and Adaptive Hyper-parameters [5.319361976450981]
Federated learning is a distributed, privacy-aware learning scenario which trains a single model on data belonging to several clients.
Current federated learning methods struggle in cases with heterogeneous client-side data distributions.
We propose a novel representation matching scheme that reduces the divergence of local models.
arXiv Detail & Related papers (2019-12-30T20:19:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.