HawkRover: An Autonomous mmWave Vehicular Communication Testbed with
Multi-sensor Fusion and Deep Learning
- URL: http://arxiv.org/abs/2401.01822v2
- Date: Thu, 4 Jan 2024 14:28:02 GMT
- Title: HawkRover: An Autonomous mmWave Vehicular Communication Testbed with
Multi-sensor Fusion and Deep Learning
- Authors: Ethan Zhu, Haijian Sun, Mingyue Ji
- Abstract summary: Connected and automated vehicles (CAVs) have become a transformative technology that can change our daily life.
Currently, millimeter-wave (mmWave) bands are identified as a promising CAV connectivity solution.
While they can provide high data rates, their realization faces many challenges, such as high attenuation during mmWave signal propagation and mobility management.
This study proposes an autonomous and low-cost testbed to collect extensive co-located mmWave signal and other sensor data to facilitate mmWave vehicular communications.
- Score: 26.133092114053472
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Connected and automated vehicles (CAVs) have become a transformative
technology that can change our daily life. Currently, millimeter-wave (mmWave)
bands are identified as a promising CAV connectivity solution. While they can
provide high data rates, their realization faces many challenges, such as high
attenuation during mmWave signal propagation and mobility management. Existing
solutions have to initiate pilot signals to measure channel information, then
apply signal processing to calculate the best narrow beam towards the receiver
end to guarantee sufficient signal power. This process incurs significant
overhead and delay, and is hence not suitable for vehicles. In this study, we
propose an autonomous and low-cost testbed to collect extensive co-located
mmWave signal data and other sensor data such as LiDAR (Light Detection and
Ranging), cameras, and ultrasonic sensors, traditionally used for "automated"
driving, to facilitate mmWave vehicular communications. Intuitively, these
sensors can build a 3D map around the vehicle from which the signal propagation
path can be estimated, eliminating the iterative process via pilot signals.
This multimodal data fusion, together with AI, is expected to bring significant
advances in "connected" research.
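The abstract's core idea, replacing the pilot-signal beam sweep with a beam chosen directly from a sensor-estimated receiver position, can be illustrated with a minimal sketch. This assumes a half-wavelength uniform linear array and a uniform-angle DFT-style codebook; all names and sizes are illustrative, not taken from the paper:

```python
import numpy as np

def dft_codebook(num_antennas: int, num_beams: int) -> np.ndarray:
    """Columns are unit-norm steering vectors of a half-wavelength-spaced
    uniform linear array, at angles evenly spaced in [-pi/2, pi/2]."""
    angles = np.linspace(-np.pi / 2, np.pi / 2, num_beams)
    n = np.arange(num_antennas)
    return np.exp(1j * np.pi * np.outer(n, np.sin(angles))) / np.sqrt(num_antennas)

def select_beam(codebook: np.ndarray, rx_xy: np.ndarray) -> int:
    """Map a sensor-estimated receiver position (lateral, forward) in the
    array's local frame to the codebook beam with the highest array gain
    along the line-of-sight direction -- no iterative pilot sweep needed."""
    theta = np.arctan2(rx_xy[0], rx_xy[1])  # LoS azimuth from array boresight
    num_antennas = codebook.shape[0]
    n = np.arange(num_antennas)
    # Steering vector toward the estimated receiver direction
    a = np.exp(1j * np.pi * n * np.sin(theta)) / np.sqrt(num_antennas)
    gains = np.abs(codebook.conj().T @ a)   # array gain of each beam toward theta
    return int(np.argmax(gains))

codebook = dft_codebook(num_antennas=32, num_beams=65)
# A receiver straight ahead (lateral 0 m, forward 10 m) maps to the
# center beam of the 65-beam codebook (index 32).
print(select_beam(codebook, np.array([0.0, 10.0])))
```

In practice the position estimate would come from fused LiDAR/camera/ultrasonic data, and a learned model would handle non-line-of-sight geometry; this sketch only shows why a good position estimate can stand in for an exhaustive beam measurement.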
Related papers
- ViT LoS V2X: Vision Transformers for Environment-aware LoS Blockage Prediction for 6G Vehicular Networks [20.953587995374168]
We propose a Deep Learning-based approach that combines Convolutional Neural Networks (CNNs) and customized Vision Transformers (ViTs)
Our method capitalizes on the synergistic strengths of CNNs and ViTs to extract features from time-series multimodal data.
Our results show that the proposed approach achieves high accuracy and outperforms state-of-the-art solutions, achieving more than 95% prediction accuracy.
arXiv Detail & Related papers (2024-06-27T01:38:09Z) - DeepSense-V2V: A Vehicle-to-Vehicle Multi-Modal Sensing, Localization, and Communications Dataset [12.007501768974281]
This work presents the first large-scale multi-modal dataset for studying mmWave vehicle-to-vehicle communications.
The dataset contains vehicles driving during the day and night for 120 km in intercity and rural settings, with speeds up to 100 km per hour.
More than one million objects were detected across all images, from trucks to bicycles.
arXiv Detail & Related papers (2024-06-25T19:43:49Z) - Spatial Channel State Information Prediction with Generative AI: Towards
Holographic Communication and Digital Radio Twin [23.09171064957228]
6G promises to deliver faster and more reliable wireless connections via cutting-edge radio technologies.
Traditional management methods are mainly reactive, usually based on feedback from users to adapt to the dynamic wireless channel.
Advances in hardware and neural networks make it possible to predict such spatial-CSI using precise environmental information.
We propose a new framework, digital radio twin, which takes advantage of both the digital world and deterministic control over radio waves.
arXiv Detail & Related papers (2024-01-16T00:29:05Z) - Multimodal Transformers for Wireless Communications: A Case Study in
Beam Prediction [7.727175654790777]
We present a multimodal transformer deep learning framework for sensing-assisted beam prediction.
We employ a convolutional neural network to extract the features from a sequence of images, point clouds, and radar raw data sampled over time.
Experimental results show that our solution trained on image and GPS data achieves the best distance-based accuracy of predicted beams, at 78.44%.
arXiv Detail & Related papers (2023-09-21T06:29:38Z) - UnLoc: A Universal Localization Method for Autonomous Vehicles using
LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z) - Multi-task Learning for Radar Signal Characterisation [48.265859815346985]
This paper presents an approach for tackling radar signal classification and characterisation as a multi-task learning (MTL) problem.
We propose the IQ Signal Transformer (IQST) among several reference architectures that allow for simultaneous optimisation of multiple regression and classification tasks.
We demonstrate the performance of our proposed MTL model on a synthetic radar dataset, while also providing a first-of-its-kind benchmark for radar signal characterisation.
arXiv Detail & Related papers (2023-06-19T12:01:28Z) - Deep Reinforcement Learning for Interference Management in UAV-based 3D
Networks: Potentials and Challenges [137.47736805685457]
We show that interference can still be effectively mitigated even without knowing its channel information.
By harnessing interference, the proposed solutions enable the continued growth of civilian UAVs.
arXiv Detail & Related papers (2023-05-11T18:06:46Z) - HUM3DIL: Semi-supervised Multi-modal 3D Human Pose Estimation for
Autonomous Driving [95.42203932627102]
3D human pose estimation is an emerging technology, which can enable the autonomous vehicle to perceive and understand the subtle and complex behaviors of pedestrians.
Our method efficiently makes use of these complementary signals, in a semi-supervised fashion and outperforms existing methods with a large margin.
Specifically, we embed LiDAR points into pixel-aligned multi-modal features, which we pass through a sequence of Transformer refinement stages.
arXiv Detail & Related papers (2022-12-15T11:15:14Z) - Complex-valued Convolutional Neural Networks for Enhanced Radar Signal
Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
arXiv Detail & Related papers (2021-04-29T10:06:29Z) - Distributional Reinforcement Learning for mmWave Communications with
Intelligent Reflectors on a UAV [119.97450366894718]
A novel communication framework that uses an unmanned aerial vehicle (UAV)-carried intelligent reflector (IR) is proposed.
In order to maximize the downlink sum-rate, the optimal precoding matrix (at the base station) and reflection coefficient (at the IR) are jointly derived.
arXiv Detail & Related papers (2020-11-03T16:50:37Z) - Estimating the Magnitude and Phase of Automotive Radar Signals under
Multiple Interference Sources with Fully Convolutional Networks [22.081568892330996]
Radar sensors are gradually becoming widespread equipment in road vehicles, playing a crucial role in autonomous driving and road safety.
The broad adoption of radar sensors increases the chance of interference among sensors from different vehicles, generating corrupted range profiles and range-Doppler maps.
In this paper, we propose a fully convolutional neural network for automotive radar interference mitigation.
arXiv Detail & Related papers (2020-08-11T18:50:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.