Point Cloud-based Proactive Link Quality Prediction for Millimeter-wave
Communications
- URL: http://arxiv.org/abs/2301.00752v4
- Date: Thu, 7 Dec 2023 16:42:32 GMT
- Title: Point Cloud-based Proactive Link Quality Prediction for Millimeter-wave
Communications
- Authors: Shoki Ohta, Takayuki Nishio, Riichi Kudo, Kahoko Takahashi, Hisashi
Nagata
- Abstract summary: This study proposes a point cloud-based method for mmWave link quality prediction.
Our proposed method can predict future large attenuation of mmWave received signal strength and throughput.
- Score: 2.559190942797394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study demonstrates the feasibility of point cloud-based proactive link
quality prediction for millimeter-wave (mmWave) communications. Previous
studies have proposed machine learning-based methods to predict received signal
strength for future time periods using time series of depth images to mitigate
the line-of-sight (LOS) path blockage by pedestrians in mmWave communication.
However, these image-based methods have limited applicability due to privacy
concerns as camera images may contain sensitive information. This study
proposes a point cloud-based method for mmWave link quality prediction and
demonstrates its feasibility through experiments. Point clouds represent
three-dimensional (3D) spaces as a set of points and are sparser and less
likely to contain sensitive information than camera images. Additionally, point
clouds provide 3D position and motion information, which is necessary for
understanding the radio propagation environment involving pedestrians. This
study designs the mmWave link quality prediction method and conducts realistic
indoor experiments, where the link quality fluctuates significantly due to
human blockage, using commercially available IEEE 802.11ad-based 60 GHz
wireless LAN devices together with a Kinect v2 RGB-D camera and a Velodyne
VLP-16 light detection and ranging (LiDAR) sensor for point cloud
acquisition. The experimental
results showed that our proposed method can predict future large attenuation of
mmWave received signal strength and throughput induced by the LOS path blockage
by pedestrians with comparable or superior accuracy to image-based prediction
methods. Hence, our point cloud-based method can serve as a viable alternative
to image-based methods.
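To make the prediction pipeline concrete, below is a minimal sketch (not the authors' implementation) of how a time series of point cloud frames could drive proactive link quality prediction: each LiDAR or RGB-D frame is voxelized into an occupancy grid, a small 3D-CNN encodes each frame, a GRU models the temporal dynamics introduced by moving pedestrians, and a regression head outputs the received signal strength a short horizon into the future. The voxel resolution, network sizes, and prediction horizon are illustrative assumptions, not values from the paper.

```python
import numpy as np
import torch
import torch.nn as nn


def voxelize(points, grid=(32, 32, 16), bounds=((-3, 3), (-3, 3), (0, 2.5))):
    """Convert an (N, 3) point cloud into a binary occupancy grid (illustrative bounds)."""
    occ = np.zeros(grid, dtype=np.float32)
    idx = []
    for axis, ((lo, hi), g) in enumerate(zip(bounds, grid)):
        cells = np.clip(((points[:, axis] - lo) / (hi - lo) * g).astype(int), 0, g - 1)
        idx.append(cells)
    occ[idx[0], idx[1], idx[2]] = 1.0
    return occ


class LinkQualityPredictor(nn.Module):
    """3D-CNN per-frame encoder + GRU over time + regression head (hypothetical architecture)."""

    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),      # -> (batch * T, 32)
        )
        self.temporal = nn.GRU(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                # predicted RSS at t + horizon

    def forward(self, frames):                          # frames: (B, T, 1, X, Y, Z)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        _, h = self.temporal(feats)                     # h: (1, B, hidden)
        return self.head(h[-1]).squeeze(-1)             # (B,)


if __name__ == "__main__":
    # Toy usage: 8 past point cloud frames of 2000 random points each.
    frames = np.stack([
        voxelize(np.random.rand(2000, 3) * [6.0, 6.0, 2.5] - [3.0, 3.0, 0.0])
        for _ in range(8)
    ])
    x = torch.from_numpy(frames)[None, :, None]         # (1, 8, 1, 32, 32, 16)
    model = LinkQualityPredictor()
    print("predicted future RSS (untrained, arbitrary units):", model(x).item())
```

In practice, such a model would be trained on synchronized traces of point cloud frames and measured received signal strength or throughput from the 60 GHz link, so that large attenuation events caused by approaching pedestrians can be predicted before the LOS path is actually blocked.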
Related papers
- mmDEAR: mmWave Point Cloud Density Enhancement for Accurate Human Body Reconstruction [14.480271406960467]
We propose a two-stage deep learning framework that enhances mmWave point clouds and improves body reconstruction accuracy.
Our approach outperforms state-of-the-art methods, with the enhanced point clouds further improving performance when integrated into existing models.
arXiv Detail & Related papers (2025-03-04T08:03:53Z) - MITO: A Millimeter-Wave Dataset and Simulator for Non-Line-of-Sight Perception [4.794643874201285]
We present MITO, the first millimeter-wave (mmWave) dataset of diverse, everyday objects.
We generate 550 high-resolution mmWave images in line-of-sight and non-line-of-sight (NLOS) conditions, as well as RGB-D images, segmentation masks, and raw mmWave signals.
arXiv Detail & Related papers (2025-02-14T16:12:14Z) - bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original spatiotemporal resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
arXiv Detail & Related papers (2024-10-30T17:30:35Z) - ProbRadarM3F: mmWave Radar based Human Skeletal Pose Estimation with Probability Map Guided Multi-Format Feature Fusion [14.83158440666821]
This paper introduces a probability map guided multi-format feature fusion model, ProbRadarM3F.
ProbRadarM3F fuses traditional heatmap features with positional features and effectively estimates 14 keypoints of the human body.
arXiv Detail & Related papers (2024-05-08T15:54:57Z) - Leveraging Neural Radiance Fields for Uncertainty-Aware Visual
Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, many of the rendered data are polluted by artifacts or only contain minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z) - Semantic Segmentation of Radar Detections using Convolutions on Point
Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z) - DensePose From WiFi [86.61881052177228]
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
arXiv Detail & Related papers (2022-12-31T16:48:43Z) - mm-Wave Radar Hand Shape Classification Using Deformable Transformers [0.46007387171990594]
A novel, real-time, mm-Wave radar-based static hand shape classification algorithm and implementation are proposed.
The method finds several applications in low-cost and privacy-sensitive touchless control technology using 60 GHz radar as the sensor input.
arXiv Detail & Related papers (2022-10-24T09:56:11Z) - mmBody Benchmark: 3D Body Reconstruction Dataset and Analysis for
Millimeter Wave Radar [10.610455816814985]
Millimeter Wave (mmWave) Radar is gaining popularity as it can work in adverse environments like smoke, rain, snow, poor lighting, etc.
Prior work has explored the possibility of reconstructing 3D skeletons or meshes from the noisy and sparse mmWave Radar signals.
This dataset consists of synchronized and calibrated mmWave radar point clouds and RGB(D) images in different scenes and skeleton/mesh annotations for humans in the scenes.
arXiv Detail & Related papers (2022-09-12T08:00:31Z) - Evaluating Point Cloud from Moving Camera Videos: A No-Reference Metric [58.309735075960745]
This paper explores the way of dealing with point cloud quality assessment (PCQA) tasks via video quality assessment (VQA) methods.
We generate the captured videos by rotating the camera around the point clouds through several circular pathways.
We extract both spatial and temporal quality-aware features from the selected key frames and the video clips through using trainable 2D-CNN and pre-trained 3D-CNN models.
arXiv Detail & Related papers (2022-08-30T08:59:41Z) - mmPose-NLP: A Natural Language Processing Approach to Precise Skeletal
Pose Estimation using mmWave Radars [0.0]
This paper presents a novel Natural Language Processing (NLP) inspired Sequence-to-Sequence (Seq2Seq) skeletal key-point estimator using millimeter-wave (mmWave) radar data.
To the best of the authors' knowledge, this is the first method to precisely estimate up to 25 skeletal key-points using mmWave radar data alone.
Skeletal pose estimation is critical in applications ranging from autonomous vehicles, traffic monitoring, patient monitoring, and gait analysis to defense security forensics, and it aids both preventative and actionable decision making.
arXiv Detail & Related papers (2021-07-21T19:45:17Z) - R-AGNO-RPN: A LIDAR-Camera Region Deep Network for Resolution-Agnostic
Detection [3.4761212729163313]
R-AGNO-RPN, a region proposal network built on the fusion of 3D point clouds and RGB images, is proposed.
Our approach is also designed to be applicable to low point cloud resolutions.
arXiv Detail & Related papers (2020-12-10T15:22:58Z) - Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise present in Radar measurements is one of the key reasons that prevents existing fusion methods from being applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets to feature Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.