Position Aided Beam Prediction in the Real World: How Useful GPS
Locations Actually Are?
- URL: http://arxiv.org/abs/2205.09054v2
- Date: Thu, 19 May 2022 22:25:27 GMT
- Title: Position Aided Beam Prediction in the Real World: How Useful GPS
Locations Actually Are?
- Authors: João Morais, Arash Behboodi, Hamed Pezeshki and Ahmed Alkhateeb
- Abstract summary: Millimeter-wave (mmWave) communication systems rely on narrow beams for achieving sufficient receive signal power. Adjusting these beams is typically associated with large training overhead.
We investigate position-aided beam prediction using a real-world large-scale dataset to derive insights into precisely how much overhead can be saved in practice.
- Score: 12.847571826603726
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Millimeter-wave (mmWave) communication systems rely on narrow beams for
achieving sufficient receive signal power. Adjusting these beams is typically
associated with large training overhead, which becomes particularly critical
for highly-mobile applications. Intuitively, since optimal beam selection can
benefit from the knowledge of the positions of communication terminals, there
has been increasing interest in leveraging position data to reduce the overhead
in mmWave beam prediction. Prior work, however, studied this problem using only
synthetic data that generally does not accurately represent real-world
measurements. In this paper, we investigate position-aided beam prediction
using a real-world large-scale dataset to derive insights into precisely how
much overhead can be saved in practice. Furthermore, we analyze which machine
learning algorithms perform best, what factors degrade inference performance in
real data, and which machine learning metrics are more meaningful in capturing
the actual communication system performance.
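To make the prediction task concrete, here is a minimal, illustrative sketch in the spirit of the abstract: a simple classifier maps a user's (latitude, longitude) to the index of the best beam in a fixed codebook and is scored with top-k accuracy. The 64-beam codebook, the k-nearest-neighbours model, and the synthetic coordinates are assumptions for illustration only, not details taken from the paper.

```python
# Illustrative sketch only (not the paper's code): predict the best mmWave beam
# index from a user's GPS position and report top-k accuracy.
# Assumption: a DeepSense-6G-style set of (latitude, longitude, best_beam) samples
# with a 64-beam codebook; the synthetic data below merely stands in for it.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
N_BEAMS = 64

# Stand-in positions along a street; tie the "best" beam loosely to longitude.
positions = rng.uniform(low=[33.420, -111.930], high=[33.430, -111.920], size=(2000, 2))
lon = positions[:, 1]
best_beam = ((lon - lon.min()) / (lon.max() - lon.min()) * (N_BEAMS - 1)).astype(int)

# Normalize coordinates so both features have comparable scale.
pos_norm = (positions - positions.mean(axis=0)) / positions.std(axis=0)
X_tr, X_te, y_tr, y_te = train_test_split(pos_norm, best_beam, test_size=0.2, random_state=0)

model = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

# Top-k accuracy: is the true best beam among the k most likely predictions?
# In beam training terms, k is how many candidate beams would still be swept.
proba = model.predict_proba(X_te)          # shape: (n_test, n_classes_seen)
classes = model.classes_
for k in (1, 3, 5):
    topk_beams = classes[np.argsort(proba, axis=1)[:, -k:]]
    acc = np.mean([y in row for y, row in zip(y_te, topk_beams)])
    print(f"top-{k} accuracy: {acc:.3f}")
```

In the paper's framing, higher top-k accuracy at small k translates directly into fewer beams that need to be trained, i.e., lower beam training overhead.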
Related papers
- Beam Prediction based on Large Language Models [51.45077318268427]
Millimeter-wave (mmWave) communication is promising for next-generation wireless networks but suffers from significant path loss.
Traditional deep learning models, such as long short-term memory (LSTM) networks, enhance beam tracking accuracy but are limited by poor robustness and generalization.
In this letter, we use large language models (LLMs) to improve the robustness of beam prediction.
arXiv Detail & Related papers (2024-08-16T12:40:01Z)
- Scale-Translation Equivariant Network for Oceanic Internal Solitary Wave Localization [7.444865250744234]
Internal solitary waves (ISWs) are gravity waves that are often observed in the ocean interior rather than at the surface.
Cloud cover in optical remote sensing images variably obscures ground information, leading to blurred or missing surface observations.
This paper aims at altimeter-based machine learning solutions to automatically locate ISWs.
arXiv Detail & Related papers (2024-06-18T21:09:56Z)
- Near-field Beam training for Extremely Large-scale MIMO Based on Deep Learning [20.67122533341949]
We propose a near-field beam training method based on deep learning.
We use a convolutional neural network (CNN) to efficiently learn channel characteristics from historical data.
The proposed scheme achieves a more stable beamforming gain and significantly improves performance compared to the traditional beam training method.
arXiv Detail & Related papers (2024-06-05T13:26:25Z)
- SPOT: Scalable 3D Pre-training via Occupancy Prediction for Learning Transferable 3D Representations [76.45009891152178]
The pretraining-finetuning approach can alleviate the labeling burden by fine-tuning a pre-trained backbone across various downstream datasets and tasks.
We show, for the first time, that general representation learning can be achieved through the task of occupancy prediction.
Our findings will facilitate the understanding of LiDAR points and pave the way for future advancements in LiDAR pre-training.
arXiv Detail & Related papers (2023-09-19T11:13:01Z)
- Camera Based mmWave Beam Prediction: Towards Multi-Candidate Real-World Scenarios [15.287380309115399]
This paper extensively investigates the sensing-aided beam prediction problem in a real-world vehicle-to-infrastructure (V2I) scenario.
In particular, this paper proposes to utilize visual and positional data to predict the optimal beam indices.
The proposed solutions are evaluated on the large-scale real-world DeepSense 6G dataset.
arXiv Detail & Related papers (2023-08-14T00:15:01Z)
- Towards Scale Consistent Monocular Visual Odometry by Learning from the Virtual World [83.36195426897768]
We propose VRVO, a novel framework for retrieving the absolute scale from virtual data.
We first train a scale-aware disparity network using both monocular real images and stereo virtual data.
The resulting scale-consistent disparities are then integrated with a direct VO system.
arXiv Detail & Related papers (2022-03-11T01:51:54Z)
- Understanding Memorization from the Perspective of Optimization via Efficient Influence Estimation [54.899751055620904]
We study the phenomenon of memorization with turn-over dropout, an efficient method to estimate influence and memorization, for data with true labels (real data) and data with random labels (random data).
Our main findings are: (i) for both real data and random data, the optimization of easy examples (e.g., real data) and difficult examples (e.g., random data) is conducted by the network simultaneously, with easy ones at a higher speed; (ii) for real data, a correct difficult example in the training dataset is more informative than an easy one.
arXiv Detail & Related papers (2021-12-16T11:34:23Z)
- A Novel Look at LIDAR-aided Data-driven mmWave Beam Selection [24.711393214172148]
We propose a lightweight neural network (NN) architecture along with the corresponding LIDAR preprocessing.
Our NN-based beam selection scheme can achieve 79.9% throughput without any beam search overhead and 95% by searching among as few as 6 beams (a sketch of this style of top-k/throughput metric appears after this list).
arXiv Detail & Related papers (2021-04-29T18:07:31Z)
- Applying Deep-Learning-Based Computer Vision to Wireless Communications: Methodologies, Opportunities, and Challenges [100.45137961106069]
Deep learning (DL) has seen great success in the computer vision (CV) field.
This article introduces ideas about applying DL-based CV in wireless communications.
arXiv Detail & Related papers (2020-06-10T11:37:49Z)
- Deflating Dataset Bias Using Synthetic Data Augmentation [8.509201763744246]
State-of-the-art methods for most vision tasks in Autonomous Vehicles (AVs) rely on supervised learning.
The goal of this paper is to investigate the use of targeted synthetic data augmentation for filling gaps in real datasets for vision tasks.
Empirical studies on three different computer vision tasks of practical use to AVs consistently show that having synthetic data in the training mix provides a significant boost in cross-dataset generalization performance.
arXiv Detail & Related papers (2020-04-28T21:56:10Z)
- Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data for accurate and reliable pedestrian navigation is a key component of emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)
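The abstract above also asks which machine learning metrics best capture the actual communication system performance, and the LIDAR-aided beam selection entry reports throughput after sweeping only a few predicted candidates. The sketch below contrasts plain top-k accuracy with a power-based metric (receive power achieved by the best of the k predicted beams, relative to the true optimum); every array here is a hypothetical stand-in, not a result from any of the papers.

```python
# Hypothetical-data sketch: compare top-k accuracy with a communication-oriented
# metric, the average receive power achieved after sweeping only the k beams a
# model ranks highest, normalized by the power of the true best beam.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_beams = 500, 64

# Stand-in receive power for every beam of every sample, plus a (random) model
# ranking of beams per sample, most likely candidate first.
beam_power = rng.rayleigh(scale=1.0, size=(n_samples, n_beams))
pred_ranking = np.argsort(rng.normal(size=(n_samples, n_beams)), axis=1)

best_beam = beam_power.argmax(axis=1)
best_power = beam_power.max(axis=1)

for k in (1, 3, 6):
    candidates = pred_ranking[:, :k]                      # beams the model would sweep
    topk_acc = np.mean([b in row for b, row in zip(best_beam, candidates)])
    # Power obtained after measuring only those k candidates and keeping the best.
    achieved = np.take_along_axis(beam_power, candidates, axis=1).max(axis=1)
    rel_power = np.mean(achieved / best_power)
    print(f"k={k}: top-{k} accuracy = {topk_acc:.3f}, relative receive power = {rel_power:.3f}")
```

Relative receive power (or the throughput derived from it) is usually more forgiving than top-k accuracy, since a mispredicted but nearly optimal beam can still deliver most of the achievable power; this is why the two metric families can tell different stories about the same model.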