Radar-based Materials Classification Using Deep Wavelet Scattering
Transform: A Comparison of Centimeter vs. Millimeter Wave Units
- URL: http://arxiv.org/abs/2202.05169v1
- Date: Tue, 8 Feb 2022 02:07:14 GMT
- Title: Radar-based Materials Classification Using Deep Wavelet Scattering
Transform: A Comparison of Centimeter vs. Millimeter Wave Units
- Authors: Rami N. Khushaba (The University of Sydney), Andrew J. Hill (The
University of Sydney)
- Abstract summary: This research considers two radar units with different frequency ranges: Walabot-3D (6.3-8 GHz) cm-wave and IMAGEVK-74 (62-69 GHz) mm-wave imaging units by Vayyar Imaging.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radar-based materials detection has received significant attention in recent
years for its potential inclusion in consumer and industrial applications such as
object recognition for grasping and manufacturing quality assurance and control.
Several radar-based solutions have been published for material classification
under controlled settings with specific material properties and shapes. Recent
literature has challenged the earlier findings on radar-based materials
classification, claiming that earlier solutions are not easily scaled to
industrial applications due to a variety of real-world issues. Published
experiments on how these factors affect the robustness of traditional
radar-based features have already demonstrated that deep neural networks can
mitigate, to some extent, their impact and produce a viable solution. However,
previous studies lacked an investigation of the usefulness of lower-frequency
radar units, specifically below 10 GHz, against the higher-range units around and
above 60 GHz. This research considers two radar units with different frequency
ranges: the Walabot-3D (6.3-8 GHz) cm-wave and IMAGEVK-74 (62-69 GHz) mm-wave
imaging units by Vayyar Imaging. A comparison is presented on the applicability
of each unit for material classification.
This work extends previous efforts by applying the deep wavelet scattering
transform to identify different materials from the reflected signals. In the
wavelet scattering feature extractor, data is propagated through a series of
wavelet transforms, nonlinearities, and averaging to produce low-variance
representations of the reflected radar signals. This work is unique in its
comparison of radar units and algorithms for material classification and
includes real-time demonstrations that show strong performance by both units,
with increased robustness offered by the cm-wave radar unit.
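To make the feature-extraction step concrete, the sketch below shows a minimal first-order scattering pipeline in plain NumPy: a band-pass wavelet filter bank, a modulus nonlinearity, and time averaging, mirroring the "wavelet transforms, nonlinearities, and averaging" sequence described in the abstract. The filter design, scale count, channel count, and function names are illustrative assumptions, not the authors' implementation; a full deep scattering network (e.g. MATLAB's waveletScattering or the kymatio library) also computes second-order coefficients.

```python
import numpy as np

def morlet_bank(n, num_scales=8, xi0=0.4):
    # Gaussian band-pass filters at dyadically spaced centre frequencies
    # (a simplified stand-in for an analytic Morlet wavelet filter bank).
    freqs = np.fft.fftfreq(n)
    bank = []
    for j in range(num_scales):
        xi = xi0 / (2 ** j)            # centre frequency of scale j
        sigma = xi / 4.0               # bandwidth shrinks with the centre frequency
        bank.append(np.exp(-((freqs - xi) ** 2) / (2.0 * sigma ** 2)))
    return np.stack(bank)              # shape (num_scales, n), frequency domain

def scattering_features(x, num_scales=8):
    # First-order scattering: wavelet transform -> modulus -> time averaging.
    n = x.shape[-1]
    spectrum = np.fft.fft(x)
    bank = morlet_bank(n, num_scales)
    # |x * psi_j|: band-pass filtering followed by the modulus nonlinearity.
    envelopes = np.abs(np.fft.ifft(spectrum * bank, axis=-1))
    s0 = np.mean(np.abs(x))            # zeroth-order coefficient (global average)
    s1 = envelopes.mean(axis=-1)       # first-order coefficients, one per scale
    return np.concatenate(([s0], s1))  # low-variance feature vector

# Example: featurise one synthetic "reflected signal" per antenna-pair channel.
rng = np.random.default_rng(0)
frame = rng.standard_normal((4, 1024))                   # 4 hypothetical channels
features = np.concatenate([scattering_features(ch) for ch in frame])
print(features.shape)                                    # (4 * (1 + 8),) = (36,)
```

Because the averaging step discards fast, shape- and placement-dependent fluctuations while keeping the scale-dependent energy profile, the resulting feature vector is a low-variance representation that can be fed directly to a conventional classifier.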
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z) - Towards Dense and Accurate Radar Perception Via Efficient Cross-Modal Diffusion Model [4.269423698485249]
This paper proposes a novel approach to dense and accurate mmWave radar point cloud construction via cross-modal learning.
Specifically, we introduce diffusion models, which possess state-of-the-art performance in generative modeling, to predict LiDAR-like point clouds from paired raw radar data.
We validate the proposed method through extensive benchmark comparisons and real-world experiments, demonstrating its superior performance and generalization ability.
arXiv Detail & Related papers (2024-03-13T12:20:20Z) - On Target Detection by Quantum Radar (Preprint) [1.0878040851637998]
Noise Radar and Quantum Radar exploit the randomness of the transmitted signal to enhance radar covertness and to reduce mutual interference.
Various Quantum Radar proposals cannot lead to any useful result, including, but not limited to, the alleged detection of stealth targets.
arXiv Detail & Related papers (2024-02-29T18:58:40Z) - Multi-stage Learning for Radar Pulse Activity Segmentation [51.781832424705094]
Radio signal recognition is a crucial function in electronic warfare.
Precise identification and localisation of radar pulse activities are required by electronic warfare systems.
Deep learning-based radar pulse activity recognition methods have remained largely underexplored.
arXiv Detail & Related papers (2023-12-15T01:56:27Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z) - T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC Radar Signals [0.0]
Object detection utilizing Frequency Modulated Continuous Wave radar is becoming increasingly popular in the field of autonomous systems.
Radar does not possess the same drawbacks seen in other emission-based sensors such as LiDAR, primarily the degradation or loss of return signals due to weather conditions such as rain or snow.
We introduce hierarchical Swin Vision transformers to the field of radar object detection and show their capability to operate on inputs varying in pre-processing, along with different radar configurations.
arXiv Detail & Related papers (2023-03-29T18:04:19Z) - mm-Wave Radar Hand Shape Classification Using Deformable Transformers [0.46007387171990594]
A novel, real-time, mm-Wave radar-based static hand shape classification algorithm and implementation are proposed.
The method finds several applications in low-cost and privacy-sensitive touchless control technology using 60 GHz radar as the sensor input.
arXiv Detail & Related papers (2022-10-24T09:56:11Z) - Human Behavior Recognition Method Based on CEEMD-ES Radar Selection [12.335803365712277]
Millimeter-wave radar for identifying human behavior has been widely used in medical, security, and other fields.
Processing data from multiple radars also requires considerable time and computational cost.
The Complementary Ensemble Empirical Mode Decomposition-Energy Slice (CEEMD-ES) multistatic radar selection method is proposed to solve these problems.
Experiments show that this method can effectively select the radar, and the recognition rate of three kinds of human actions is 98.53%.
arXiv Detail & Related papers (2022-06-06T16:01:06Z) - Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise in Radar measurements is one of the main reasons that prevents the existing fusion methods from being applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets to feature Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)