On Input Formats for Radar Micro-Doppler Signature Processing by Convolutional Neural Networks
- URL: http://arxiv.org/abs/2404.08291v1
- Date: Fri, 12 Apr 2024 07:30:08 GMT
- Title: On Input Formats for Radar Micro-Doppler Signature Processing by Convolutional Neural Networks
- Authors: Mikolaj Czerkawski, Carmine Clemente, Craig Michie, Christos Tachtatzis
- Abstract summary: The utility of the phase information, as well as the optimal format of the Doppler-time input for a convolutional neural network, is analysed.
It is found that the performance achieved by convolutional neural network classifiers is heavily influenced by the type of input representation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks have often been proposed for processing radar Micro-Doppler signatures, most commonly with the goal of classifying the signals. The majority of works tend to disregard phase information from the complex time-frequency representation. Here, the utility of the phase information, as well as the optimal format of the Doppler-time input for a convolutional neural network, is analysed. It is found that the performance achieved by convolutional neural network classifiers is heavily influenced by the type of input representation, even across formats with equivalent information. Furthermore, it is demonstrated that the phase component of the Doppler-time representation contains rich information useful for classification and that unwrapping the phase in the temporal dimension can improve the results compared to a magnitude-only solution, improving accuracy from 0.920 to 0.938 on the tested human activity dataset. A further improvement to 0.947 is achieved by training a linear classifier on embeddings from multiple formats.
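The temporal phase-unwrapping idea from the abstract can be sketched with NumPy. The synthetic signal, STFT parameters, and noise level below are illustrative assumptions, not the paper's actual dataset or configuration; the point is simply that a complex Doppler-time map yields magnitude, wrapped-phase, and unwrapped-phase channels carrying equivalent information in different formats:

```python
import numpy as np

# Simulate a complex radar return with sinusoidal micro-Doppler modulation
# (e.g. a swinging limb), plus a little complex noise. All parameters are
# illustrative placeholders.
rng = np.random.default_rng(0)
fs = 1000.0                                    # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
signal = np.exp(1j * 2 * np.pi * (50 * t + 20 * np.sin(2 * np.pi * 1.5 * t)))
signal += 0.05 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

# Short-time Fourier transform via overlapping, Hann-windowed FFT frames.
win, hop = 128, 32
frames = np.stack([signal[i:i + win] * np.hanning(win)
                   for i in range(0, t.size - win, hop)])
stft = np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1).T  # (freq, time)

# Candidate CNN input channels derived from the same complex representation:
magnitude = np.abs(stft)
phase = np.angle(stft)                       # wrapped to (-pi, pi]
phase_unwrapped = np.unwrap(phase, axis=1)   # unwrap along the temporal axis

print(magnitude.shape, phase_unwrapped.shape)
```

`np.unwrap` removes the 2π jumps along the time axis, turning the wrapped phase into a smooth channel that, per the abstract, a CNN can exploit alongside (or instead of) the magnitude.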
Related papers
- Rolling bearing fault diagnosis method based on generative adversarial enhanced multi-scale convolutional neural network model [7.600902237804825]
A rolling bearing fault diagnosis method based on a generative adversarial enhanced multi-scale convolutional neural network model is proposed.
Compared with the ResNet method, the experimental results show that the proposed method has better generalization and anti-noise performance.
arXiv Detail & Related papers (2024-03-21T06:42:35Z) - Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z) - Time Scale Network: A Shallow Neural Network For Time Series Data [18.46091267922322]
Time series data is often composed of information at multiple time scales.
Deep learning strategies exist to capture this information, but many make networks larger, require more data, are more demanding to compute, and are difficult to interpret.
We present a minimal, computationally efficient Time Scale Network combining the translation and dilation sequence used in discrete wavelet transforms with traditional convolutional neural networks and back-propagation.
arXiv Detail & Related papers (2023-11-10T16:39:55Z) - FFEINR: Flow Feature-Enhanced Implicit Neural Representation for Spatio-temporal Super-Resolution [4.577685231084759]
This paper proposes a Feature-Enhanced Neural Implicit Representation (FFEINR) for super-resolution of flow field data.
It can take full advantage of the implicit neural representation in terms of model structure and sampling resolution.
The training process of FFEINR is facilitated by introducing feature enhancements for the input layer.
arXiv Detail & Related papers (2023-08-24T02:28:18Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue of representing general signals.
Current approaches are difficult to scale to a large number of signals or to a large dataset.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
arXiv Detail & Related papers (2021-10-27T18:02:53Z) - Ensemble Augmentation for Deep Neural Networks Using 1-D Time Series Vibration Data [0.0]
Time-series data are one of the fundamental types of raw data representation used in data-driven techniques.
Deep Neural Networks (DNNs) require large numbers of labeled training samples to reach their optimum performance.
In this study, a data augmentation technique named ensemble augmentation is proposed to overcome this limitation.
arXiv Detail & Related papers (2021-08-06T20:04:29Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of sign function in the Fourier frequency domain using the combination of sine functions for training BNNs.
The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves the state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z) - Accurate Tumor Tissue Region Detection with Accelerated Deep Convolutional Neural Networks [12.7414209590152]
Manual annotation of pathology slides for cancer diagnosis is laborious and repetitive.
Our approach, FLASH, is based on a Deep Convolutional Neural Network (DCNN) architecture.
It reduces computational costs and is faster than typical deep learning approaches by two orders of magnitude.
arXiv Detail & Related papers (2020-04-18T08:24:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.