Motion Classification using Kinematically Sifted ACGAN-Synthesized Radar
Micro-Doppler Signatures
- URL: http://arxiv.org/abs/2001.08582v1
- Date: Sun, 19 Jan 2020 16:50:10 GMT
- Title: Motion Classification using Kinematically Sifted ACGAN-Synthesized Radar
Micro-Doppler Signatures
- Authors: Baris Erol, Sevgi Zubeyde Gurbuz, Moeness G. Amin
- Abstract summary: In this paper, an extended approach to adversarial learning is proposed for generation of synthetic radar micro-Doppler signatures.
The synthetic data is evaluated using visual interpretation, analysis of kinematic consistency, data diversity, dimensions of the latent space, and saliency maps.
A 19-layer deep convolutional neural network (DCNN) is trained to classify micro-Doppler signatures acquired from an environment different from that of the dataset supplied to the adversarial network.
- Score: 15.351282873821935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have recently received vast attention in
applications requiring classification of radar returns, including radar-based
human activity recognition for security, smart homes, assisted living, and
biomedicine. However, acquiring a sufficiently large training dataset remains a
daunting task due to the high human costs and resources required for radar data
collection. In this paper, an extended approach to adversarial learning is
proposed for generation of synthetic radar micro-Doppler signatures that are
well-adapted to different environments. The synthetic data is evaluated using
visual interpretation, analysis of kinematic consistency, data diversity,
dimensions of the latent space, and saliency maps. A principal-component
analysis (PCA) based kinematic-sifting algorithm is introduced to ensure that
synthetic signatures are consistent with physically possible human motions. The
synthetic dataset is used to train a 19-layer deep convolutional neural network
(DCNN) to classify micro-Doppler signatures acquired from an environment
different from that of the dataset supplied to the adversarial network. An
overall accuracy of 93% is achieved on a dataset that contains multiple aspect
angles (0, 30, 45, and 60 degrees), with a 9% improvement resulting from
kinematic sifting.
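
The abstract names a PCA-based kinematic-sifting step but does not spell out the exact criterion. The sketch below illustrates one plausible realization: a PCA subspace is fitted to measured micro-Doppler spectrograms, and synthetic signatures whose reconstruction error in that subspace is large are rejected as kinematically implausible. The spectrogram shape, component count, threshold, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a PCA-based kinematic sifting filter.
# The spectrogram dimensions, number of components, and the
# reconstruction-error threshold are illustrative assumptions only.
import numpy as np
from sklearn.decomposition import PCA


def fit_kinematic_subspace(real_spectrograms, n_components=20):
    """Fit a PCA subspace to measured micro-Doppler spectrograms.

    real_spectrograms: array of shape (n_samples, height, width),
    i.e. one time-frequency magnitude map per sample.
    """
    X = real_spectrograms.reshape(len(real_spectrograms), -1)
    pca = PCA(n_components=n_components)
    pca.fit(X)
    return pca


def sift_synthetic(pca, synthetic_spectrograms, rel_error_threshold=0.15):
    """Keep only synthetic signatures close to the measured-data subspace.

    A synthetic sample is accepted when its relative reconstruction error
    (energy outside the PCA subspace) is below the threshold, used here as
    a rough proxy for kinematic plausibility.
    """
    X = synthetic_spectrograms.reshape(len(synthetic_spectrograms), -1)
    X_proj = pca.inverse_transform(pca.transform(X))
    rel_err = np.linalg.norm(X - X_proj, axis=1) / np.linalg.norm(X, axis=1)
    keep = rel_err < rel_error_threshold
    return synthetic_spectrograms[keep], keep


if __name__ == "__main__":
    # Random placeholder arrays stand in for measured and ACGAN-generated
    # spectrograms; in practice these would come from the radar pipeline.
    rng = np.random.default_rng(0)
    real = rng.random((200, 64, 64))
    synthetic = rng.random((500, 64, 64))
    pca = fit_kinematic_subspace(real)
    kept, mask = sift_synthetic(pca, synthetic)
    print(f"retained {mask.sum()} of {len(mask)} synthetic signatures")
```

In the paper, the retained synthetic signatures are then used to train the 19-layer DCNN classifier; the sketch above only illustrates the sifting idea, not the classification stage.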
Related papers
- An ICA-Ensemble Learning Approach for Prediction of UWB NLOS Signals Data Classification [0.0]
This research focuses on harmonizing information through wireless communication and identifying individuals in NLOS scenarios using ultra-wideband radar signals.
Experiments demonstrate categorization accuracies of 88.37% for static data and 87.20% for dynamic data, highlighting the effectiveness of the proposed approach.
arXiv Detail & Related papers (2024-02-27T11:42:26Z) - Radar-Based Recognition of Static Hand Gestures in American Sign
Language [17.021656590925005]
This study explores the efficacy of synthetic data generated by an advanced radar ray-tracing simulator.
The simulator employs an intuitive material model that can be adjusted to introduce data diversity.
Despite exclusively training the NN on synthetic data, it demonstrates promising performance when put to the test with real measurement data.
arXiv Detail & Related papers (2024-02-20T08:19:30Z) - Physical-Layer Semantic-Aware Network for Zero-Shot Wireless Sensing [74.12670841657038]
Device-free wireless sensing has recently attracted significant interest due to its potential to support a wide range of immersive human-machine interactive applications.
Data heterogeneity in wireless signals and data privacy regulation of distributed sensing have been considered as the major challenges that hinder the wide applications of wireless sensing in large area networking systems.
We propose a novel zero-shot wireless sensing solution that allows models constructed in one or a limited number of locations to be directly transferred to other locations without any labeled data.
arXiv Detail & Related papers (2023-12-08T13:50:30Z) - Semantic Segmentation of Radar Detections using Convolutions on Point
Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z) - Collaborative Learning with a Drone Orchestrator [79.75113006257872]
A swarm of intelligent wireless devices train a shared neural network model with the help of a drone.
The proposed framework achieves a significant speedup in training, leading to average savings of 24% and 87% in drone hovering time.
arXiv Detail & Related papers (2023-03-03T23:46:25Z) - A Synthetic Dataset for 5G UAV Attacks Based on Observable Network
Parameters [3.468596481227013]
This paper presents the first synthetic dataset for Unmanned Aerial Vehicle (UAV) attacks in 5G and beyond networks.
The main objective of this data is to enable deep network development for UAV communication security.
The proposed dataset provides insights into network functionality when static or moving UAV attackers target authenticated UAVs in an urban environment.
arXiv Detail & Related papers (2022-11-05T15:12:51Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Differentiable Frequency-based Disentanglement for Aerial Video Action
Recognition [56.91538445510214]
We present a learning algorithm for human activity recognition in videos.
Our approach is designed for UAV videos, which are mainly acquired from obliquely placed dynamic cameras.
We conduct extensive experiments on the UAV Human dataset and the NEC Drone dataset.
arXiv Detail & Related papers (2022-09-15T22:16:52Z) - Object recognition for robotics from tactile time series data utilising
different neural network architectures [0.0]
This paper investigates the use of Convolutional Neural Networks (CNN) and Long-Short Term Memory (LSTM) neural network architectures for object classification on tactile data.
We compare these methods using data from two different fingertip sensors (namely the BioTac SP and WTS-FT) in the same physical setup.
The results show that the proposed method improves the maximum accuracy from 82.4% (BioTac SP fingertips) and 90.7% (WTS-FT fingertips) with complete time-series data to about 94% for both sensor types.
arXiv Detail & Related papers (2021-09-09T22:05:45Z) - Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our code will be made open-source so that future works can compare against it.
arXiv Detail & Related papers (2021-07-01T08:58:16Z)