Human Behavior Recognition Method Based on CEEMD-ES Radar Selection
- URL: http://arxiv.org/abs/2206.02705v1
- Date: Mon, 6 Jun 2022 16:01:06 GMT
- Title: Human Behavior Recognition Method Based on CEEMD-ES Radar Selection
- Authors: Zhaolin Zhang, Mingqi Song, Wugang Meng, Yuhan Liu, Fengcong Li, Xiang
Feng, Yinan Zhao
- Abstract summary: Millimeter-wave radar has been widely used to identify human behavior in medical, security, and other fields.
Processing data from multiple radars also requires considerable time and computational cost.
The Complementary Ensemble Empirical Mode Decomposition-Energy Slice (CEEMD-ES) multistatic radar selection method is proposed to solve these problems.
Experiments show that this method can effectively select radars, and the recognition rate for three kinds of human actions reaches 98.53%.
- Score: 12.335803365712277
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, millimeter-wave radar has been widely used to
identify human behavior in medical, security, and other fields. When multiple
radars perform detection tasks, the validity of the features contained in each
radar is difficult to guarantee. In addition, processing data from multiple
radars requires considerable time and computational cost. The Complementary
Ensemble Empirical Mode Decomposition-Energy Slice (CEEMD-ES) multistatic radar
selection method is proposed to solve these problems. First, the method
decomposes and reconstructs each radar signal according to the difference in
reflected echo frequency between the limbs and the trunk of the human body.
Then, a radar is selected according to how far its ratio of limb echo energy to
trunk echo energy deviates from the theoretical value. Time-domain,
frequency-domain, and various entropy features of the selected radar are
extracted. Finally, an Extreme Learning Machine (ELM) recognition model with a
ReLU kernel is established. Experiments show that this method can effectively
select radars, and the recognition rate for three kinds of human actions
reaches 98.53%.
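As a rough illustration of the pipeline described in the abstract, the sketch below (not the authors' code) shows one possible energy-slice radar-selection step and a minimal ReLU-based ELM classifier. It assumes the intrinsic mode functions (IMFs) for each radar have already been obtained from a CEEMD routine (for example via the PyEMD package), and the 40 Hz limb/trunk split frequency, the theoretical energy ratio, and the hidden-layer size are hypothetical placeholders rather than values from the paper.

```python
import numpy as np

def energy_slice_ratio(imfs, fs, split_hz=40.0):
    """Split one radar's IMFs into a 'limb' group (faster micro-Doppler
    content) and a 'trunk' group (slower bulk motion) by the dominant
    frequency of each IMF, then return the limb/trunk energy ratio.
    The 40 Hz split is a placeholder, not a value from the paper."""
    limb_energy, trunk_energy = 0.0, 0.0
    for imf in imfs:
        spectrum = np.abs(np.fft.rfft(imf))
        freqs = np.fft.rfftfreq(imf.size, d=1.0 / fs)
        dominant = freqs[np.argmax(spectrum)]
        energy = float(np.sum(imf ** 2))
        if dominant >= split_hz:
            limb_energy += energy
        else:
            trunk_energy += energy
    return limb_energy / max(trunk_energy, 1e-12)

def select_radar(imfs_per_radar, fs, theoretical_ratio):
    """Pick the radar whose limb/trunk energy ratio is closest to the
    theoretical value (assumed to be supplied externally)."""
    ratios = [energy_slice_ratio(imfs, fs) for imfs in imfs_per_radar]
    best = int(np.argmin([abs(r - theoretical_ratio) for r in ratios]))
    return best, ratios

def train_elm(features, labels_onehot, n_hidden=256, seed=0):
    """Minimal ELM with a ReLU hidden layer: random input weights and
    biases, output weights solved in closed form via the pseudo-inverse."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((features.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    hidden = np.maximum(features @ w + b, 0.0)        # ReLU activations
    beta = np.linalg.pinv(hidden) @ labels_onehot     # least-squares output weights
    return w, b, beta

def predict_elm(features, w, b, beta):
    hidden = np.maximum(features @ w + b, 0.0)
    return np.argmax(hidden @ beta, axis=1)
```

In this sketch, a caller would pass a list of IMF arrays (one per radar) to `select_radar`, extract time-domain, frequency-domain, and entropy features from the chosen radar's signal, and feed those features to `train_elm`.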
Related papers
- radarODE: An ODE-Embedded Deep Learning Model for Contactless ECG Reconstruction from Millimeter-Wave Radar [16.52097542165782]
A novel deep learning framework called radarODE is designed to fuse the temporal and morphological features extracted from radar signals and generate the ECG.
radarODE achieves better performance than the benchmark in terms of missed detection rate, root mean square error, and Pearson correlation coefficient, with improvements of 9%, 16%, and 19%, respectively.
arXiv Detail & Related papers (2024-08-03T06:07:15Z) - Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z) - RadarDistill: Boosting Radar-based Object Detection Performance via Knowledge Distillation from LiDAR Features [15.686167262542297]
RadarDistill is a knowledge distillation (KD) method which can improve the representation of radar data by leveraging LiDAR data.
RadarDistill successfully transfers desirable characteristics of LiDAR features into radar features using three key components.
Our comparative analyses conducted on the nuScenes dataset demonstrate that RadarDistill achieves state-of-the-art (SOTA) performance for the radar-only object detection task.
arXiv Detail & Related papers (2024-03-08T05:15:48Z) - Multi-stage Learning for Radar Pulse Activity Segmentation [51.781832424705094]
Radio signal recognition is a crucial function in electronic warfare, where precise identification and localisation of radar pulse activities are required.
Deep learning-based radar pulse activity recognition methods remain largely underexplored.
arXiv Detail & Related papers (2023-12-15T01:56:27Z) - RadarLCD: Learnable Radar-based Loop Closure Detection Pipeline [4.09225917049674]
This research introduces RadarLCD, a novel supervised deep learning pipeline specifically designed for Loop Closure Detection.
RadarLCD makes a significant contribution by leveraging the pre-trained HERO (Hybrid Estimation Radar Odometry) model.
The methodology undergoes evaluation across a variety of FMCW Radar dataset scenes.
arXiv Detail & Related papers (2023-09-13T17:10:23Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in
Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - Radar-based Materials Classification Using Deep Wavelet Scattering
Transform: A Comparison of Centimeter vs. Millimeter Wave Units [0.0]
This research considers two radar units with different frequency ranges: Walabot-3D (6.3-8 GHz) cm-wave and IMAGEVK-74 (62-69 GHz) mm-wave imaging units by Vayyar Imaging.
arXiv Detail & Related papers (2022-02-08T02:07:14Z) - Constrained Contextual Bandit Learning for Adaptive Radar Waveform
Selection [14.796960833031724]
A sequential decision process in which an adaptive radar system repeatedly interacts with a finite-state target channel is studied.
The radar is capable of passively sensing the spectrum at regular intervals, which provides side information for the waveform selection process.
It is shown that the waveform selection problem can be effectively addressed using a linear contextual bandit formulation.
arXiv Detail & Related papers (2021-03-09T16:43:50Z) - LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar
Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z) - Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise present in Radar measurements is one of the main reasons that prevents existing fusion methods from being applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets featuring Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.