Vehicle Cameras Guide mmWave Beams: Approach and Real-World V2V
Demonstration
- URL: http://arxiv.org/abs/2308.10362v1
- Date: Sun, 20 Aug 2023 20:43:11 GMT
- Title: Vehicle Cameras Guide mmWave Beams: Approach and Real-World V2V
Demonstration
- Authors: Tawfik Osman, Gouranga Charan, and Ahmed Alkhateeb
- Abstract summary: Accurately aligning millimeter-wave (mmWave) and terahertz (THz) narrow beams is essential to satisfy the reliability and high data rate requirements of 5G and beyond wireless communication systems.
We develop a deep learning solution for V2V scenarios to predict future beams using images from a 360 camera attached to the vehicle.
- Score: 13.117333069558812
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurately aligning millimeter-wave (mmWave) and terahertz (THz) narrow beams
is essential to satisfy the reliability and high data rate requirements of 5G and beyond
wireless communication systems. However, achieving this objective is difficult,
especially in vehicle-to-vehicle (V2V) communication scenarios, where both
transmitter and receiver are constantly mobile. Recently, additional sensing
modalities, such as visual sensors, have attracted significant interest due to
their capability to provide accurate information about the wireless
environment. To that end, in this paper, we develop a deep learning solution
for V2V scenarios to predict future beams using images from a 360 camera
attached to the vehicle. The developed solution is evaluated on a real-world
multi-modal mmWave V2V communication dataset comprising co-existing 360 camera
and mmWave beam training data. The proposed vision-aided solution achieves
$\approx 85\%$ top-5 beam prediction accuracy while significantly reducing the
beam training overhead. This highlights the potential of utilizing vision for
enabling highly-mobile V2V communications.
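As a rough illustration of this framing (beam prediction cast as classification over a beam codebook, scored with the top-5 metric quoted above), a minimal PyTorch sketch is given below. The ResNet-18 backbone, the 64-beam codebook, and the 224x224 input size are illustrative assumptions, not details reported in the paper.

```python
# Minimal sketch of vision-aided beam prediction as image classification.
# Assumptions (not the paper's reported setup): ResNet-18 backbone,
# 64-beam codebook, 224x224 RGB crops from the 360 camera stream.
import torch
import torch.nn as nn
from torchvision import models

NUM_BEAMS = 64  # assumed codebook size; the real dataset may differ


class VisionBeamPredictor(nn.Module):
    def __init__(self, num_beams: int = NUM_BEAMS):
        super().__init__()
        # CNN backbone over the camera frame; its classifier head is
        # replaced with a linear layer over beam indices.
        self.backbone = models.resnet18()
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_beams)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, 3, H, W) -> beam logits: (batch, num_beams)
        return self.backbone(images)


def topk_accuracy(logits: torch.Tensor, labels: torch.Tensor, k: int = 5) -> float:
    # Fraction of samples whose ground-truth beam index appears among the
    # k highest-scoring predictions (the paper reports ~85% at k=5).
    topk = logits.topk(k, dim=1).indices             # (batch, k)
    hits = (topk == labels.unsqueeze(1)).any(dim=1)  # (batch,)
    return hits.float().mean().item()


if __name__ == "__main__":
    model = VisionBeamPredictor()
    frames = torch.randn(8, 3, 224, 224)       # dummy camera frames
    beams = torch.randint(0, NUM_BEAMS, (8,))  # dummy ground-truth beam indices
    print("top-5 accuracy:", topk_accuracy(model(frames), beams, k=5))
```

On the real dataset, the frames would come from the 360 camera and the labels from the co-existing mmWave beam training measurements; training would use a standard cross-entropy loss over beam indices.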
Related papers
- DeepSense-V2V: A Vehicle-to-Vehicle Multi-Modal Sensing, Localization, and Communications Dataset [12.007501768974281]
This work presents the first large-scale multi-modal dataset for studying mmWave vehicle-to-vehicle communications.
The dataset contains vehicles driving during the day and night for 120 km in intercity and rural settings, with speeds up to 100 km per hour.
More than one million objects were detected across all images, from trucks to bicycles.
arXiv Detail & Related papers (2024-06-25T19:43:49Z)
- Position Aware 60 GHz mmWave Beamforming for V2V Communications Utilizing Deep Learning [2.4993733210446893]
This paper presents a deep learning-based solution that utilizes vehicular position information to predict the optimal beams with sufficient mmWave received power (a rough position-to-beam sketch follows the related-papers list below).
The results show that the solution achieves, on average, up to 84.58% of the received power of the optimal link.
arXiv Detail & Related papers (2024-02-02T09:30:27Z)
- Vision meets mmWave Radar: 3D Object Perception Benchmark for Autonomous Driving [30.456314610767667]
We introduce the CRUW3D dataset, including 66K synchronized and well-calibrated camera, radar, and LiDAR frames.
This format can enable machine learning models to achieve more reliable perception results by fusing information or features from the camera and radar.
arXiv Detail & Related papers (2023-11-17T01:07:37Z)
- V2V4Real: A Real-world Large-scale Dataset for Vehicle-to-Vehicle Cooperative Perception [49.7212681947463]
Vehicle-to-Vehicle (V2V) cooperative perception has great potential to revolutionize the autonomous driving industry.
We present V2V4Real, the first large-scale real-world multi-modal dataset for V2V perception.
Our dataset covers a driving area of 410 km, comprising 20K LiDAR frames, 40K RGB frames, 240K annotated 3D bounding boxes for 5 classes, and HDMaps.
arXiv Detail & Related papers (2023-03-14T02:49:20Z)
- DensePose From WiFi [86.61881052177228]
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
arXiv Detail & Related papers (2022-12-31T16:48:43Z)
- Millimeter Wave Drones with Cameras: Computer Vision Aided Wireless Beam Prediction [8.919072533905517]
Millimeter wave (mmWave) and terahertz (THz) drones have the potential to enable several futuristic applications.
These drones need to deploy large antenna arrays and use narrow directive beams to maintain a sufficient link budget.
This paper proposes a vision-aided machine learning-based approach that leverages visual data collected from cameras installed on the drones.
arXiv Detail & Related papers (2022-11-14T17:42:16Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Network-Aware 5G Edge Computing for Object Detection: Augmenting Wearables to "See" More, Farther and Faster [18.901994926291465]
This paper presents a detailed simulation and evaluation of 5G wireless offloading for object detection within a powerful, new smart wearable called VIS4ION.
The current VIS4ION system is an instrumented book-bag with high-resolution cameras, vision processing and haptic and audio feedback.
The paper considers uploading the camera data to a mobile edge cloud to perform real-time object detection and transmitting the detection results back to the wearable.
arXiv Detail & Related papers (2021-12-25T07:09:00Z)
- A Comprehensive Overview on 5G-and-Beyond Networks with UAVs: From Communications to Sensing and Intelligence [152.89360859658296]
5G networks need to support three typical usage scenarios, namely, enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC).
On the one hand, UAVs can be leveraged as cost-effective aerial platforms to provide ground users with enhanced communication services by exploiting their high cruising altitude and controllable maneuverability in 3D space.
On the other hand, providing such communication services simultaneously for both UAV and ground users poses new challenges due to the need for ubiquitous 3D signal coverage as well as the strong air-ground network interference.
arXiv Detail & Related papers (2020-10-19T08:56:04Z)
- V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction [74.42961817119283]
We use vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles.
By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints.
arXiv Detail & Related papers (2020-08-17T17:58:26Z)
- Applying Deep-Learning-Based Computer Vision to Wireless Communications: Methodologies, Opportunities, and Challenges [100.45137961106069]
Deep learning (DL) has seen great success in the computer vision (CV) field.
This article introduces ideas about applying DL-based CV in wireless communications.
arXiv Detail & Related papers (2020-06-10T11:37:49Z)
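For the position-aware beamforming paper listed above, a comparably minimal sketch maps transmitter/receiver positions to beam logits with a small MLP. The input layout (normalized latitude/longitude pairs for both vehicles), hidden sizes, and 64-beam codebook are illustrative assumptions, not the cited paper's model.

```python
# Minimal sketch of position-aided beam prediction: a small MLP from
# normalized (tx_lat, tx_lon, rx_lat, rx_lon) features to beam logits.
# All sizes here are illustrative assumptions.
import torch
import torch.nn as nn


class PositionBeamPredictor(nn.Module):
    def __init__(self, num_beams: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, 128),   # normalized positions of both vehicles
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, num_beams),
        )

    def forward(self, positions: torch.Tensor) -> torch.Tensor:
        # positions: (batch, 4) -> beam logits: (batch, num_beams)
        return self.net(positions)


if __name__ == "__main__":
    model = PositionBeamPredictor()
    print(model(torch.rand(2, 4)).shape)  # torch.Size([2, 64])
```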