Variational Temporal Deep Generative Model for Radar HRRP Target
Recognition
- URL: http://arxiv.org/abs/2009.13011v1
- Date: Mon, 28 Sep 2020 02:03:51 GMT
- Title: Variational Temporal Deep Generative Model for Radar HRRP Target
Recognition
- Authors: Dandan Guo, Bo Chen (Senior Member, IEEE), Wenchao Chen, Chaojie Wang,
Hongwei Liu (Member, IEEE), and Mingyuan Zhou
- Abstract summary: We develop a recurrent gamma belief network (rGBN) for radar automatic target recognition (RATR) based on high-resolution range profile (HRRP).
The proposed rGBN adopts a hierarchy of gamma distributions to build its temporal deep generative model.
Experimental results on synthetic and measured HRRP data show that the proposed models are efficient in computation, have good classification accuracy and generalization ability, and provide highly interpretable multi-stochastic-layer latent structure.
- Score: 39.01318281591659
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a recurrent gamma belief network (rGBN) for radar automatic target
recognition (RATR) based on high-resolution range profile (HRRP), which
characterizes the temporal dependence across the range cells of HRRP. The
proposed rGBN adopts a hierarchy of gamma distributions to build its temporal
deep generative model. For scalable training and fast out-of-sample prediction,
we propose the hybrid of a stochastic-gradient Markov chain Monte Carlo (MCMC)
and a recurrent variational inference model to perform posterior inference. To
utilize the label information to extract more discriminative latent
representations, we further propose supervised rGBN to jointly model the HRRP
samples and their corresponding labels. Experimental results on synthetic and
measured HRRP data show that the proposed models are efficient in computation,
have good classification accuracy and generalization ability, and provide
highly interpretable multi-stochastic-layer latent structure.
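The generative structure the abstract describes (a hierarchy of gamma distributions whose first-layer state also depends on the previous time step, with count-valued observations) can be sketched in numpy. All dimensions, matrix names (Phi1, Phi2, Pi1), and hyperparameters below are illustrative assumptions for a two-layer gamma-Poisson temporal model, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: V range cells, K1/K2 latent factors, T time steps.
V, K1, K2, T = 32, 8, 4, 5

# Loading matrices with simplex-constrained columns; Pi1 carries the
# temporal dependence between consecutive first-layer states.
Phi1 = rng.dirichlet(np.ones(V), size=K1).T      # V x K1
Phi2 = rng.dirichlet(np.ones(K1), size=K2).T     # K1 x K2
Pi1 = rng.dirichlet(np.ones(K1), size=K1).T      # K1 x K1 transition

c = 1.0                                          # gamma rate (scale = 1/c)
theta1_prev = rng.gamma(1.0, 1.0 / c, size=K1)   # initial first-layer state

X = np.zeros((T, V))
for t in range(T):
    theta2 = rng.gamma(1.0, 1.0 / c, size=K2)            # top-layer draw
    shape1 = Phi2 @ theta2 + Pi1 @ theta1_prev           # deep + temporal
    theta1 = rng.gamma(shape1, 1.0 / c)                  # first-layer draw
    X[t] = rng.poisson(Phi1 @ theta1)                    # observed counts
    theta1_prev = theta1
```

Note how the gamma shape parameter at layer one mixes information coming down the hierarchy (Phi2 @ theta2) with information carried forward in time (Pi1 @ theta1_prev), which is the coupling that lets the model capture dependence across range cells over time.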
Related papers
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z) - Streaming Gaussian Dirichlet Random Fields for Spatial Predictions of
High Dimensional Categorical Observations [0.0]
We present a novel approach to modeling a stream of temporally distributed, sparse, high-dimensional categorical observations.
The proposed approach efficiently learns global and local patterns in streaming data.
We demonstrate the ability of the proposed network approach to make more accurate predictions.
arXiv Detail & Related papers (2024-02-23T14:52:05Z) - An Energy-Based Prior for Generative Saliency [62.79775297611203]
We propose a novel generative saliency prediction framework that adopts an informative energy-based model as a prior distribution.
With the generative saliency model, we can obtain a pixel-wise uncertainty map from an image, indicating model confidence in the saliency prediction.
Experimental results show that our generative saliency model with an energy-based prior can achieve not only accurate saliency predictions but also reliable uncertainty maps consistent with human perception.
arXiv Detail & Related papers (2022-04-19T10:51:00Z) - Hierarchical Gaussian Process Models for Regression Discontinuity/Kink
under Sharp and Fuzzy Designs [0.0]
We propose nonparametric Bayesian estimators for causal inference exploiting Regression Discontinuity/Kink (RD/RK) designs.
These estimators are extended to hierarchical GP models with an intermediate Bayesian neural network layer.
Monte Carlo simulations show that our estimators perform similarly and often better than competing estimators in terms of precision, coverage and interval length.
arXiv Detail & Related papers (2021-10-03T04:23:56Z) - Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that GMR achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
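The pseudo-rehearsal idea summarized above, sampling from a generative model of old tasks so they can be replayed while training on new data, can be sketched with a hand-specified Gaussian Mixture Model. The weights, means, and covariances below stand in for a trained GMM and are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-component GMM in 2-D standing in for a trained generator.
weights = np.array([0.3, 0.7])
means = np.array([[0.0, 0.0], [4.0, 4.0]])
covs = np.array([np.eye(2) * 0.5, np.eye(2) * 1.0])

def gmm_sample(n):
    """Pseudo-rehearsal: draw n samples from the mixture so previously
    learned classes can be replayed alongside a new task's data."""
    comps = rng.choice(len(weights), size=n, p=weights)
    xs = np.array([rng.multivariate_normal(means[c], covs[c]) for c in comps])
    return xs, comps

replay_x, replay_y = gmm_sample(256)
```

Because the same mixture can serve as both generator (sampling) and classifier (component responsibilities), no separately stored exemplars are needed for replay.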
arXiv Detail & Related papers (2021-04-19T12:26:26Z) - Few-shot time series segmentation using prototype-defined infinite
hidden Markov models [3.527894538672585]
We propose a framework for interpretable, few-shot analysis of non-stationary sequential data based on flexible graphical models.
We show that RBF networks can be efficiently specified via prototypes allowing us to express complex nonstationary patterns.
The utility of the framework is demonstrated on biomedical signal processing applications such as automated seizure detection from EEG data.
arXiv Detail & Related papers (2021-02-07T19:02:33Z) - Gaussian Process Regression with Local Explanation [28.90948136731314]
We propose GPR with local explanation, which reveals the feature contributions to the prediction of each sample.
In the proposed model, both the prediction and explanation for each sample are performed using an easy-to-interpret locally linear model.
For a new test sample, the proposed model can predict the values of its target variable and weight vector, as well as their uncertainties.
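As background for the summary above, a plain GP regression predictive sketch (not the paper's locally linear explanation model) shows where the per-sample predictive mean and uncertainty come from. The kernel, length scale, and noise level are illustrative choices:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / ls**2)

# Toy 1-D training data.
X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(X)
noise = 1e-2

K = rbf(X, X) + noise * np.eye(len(X))   # train covariance + noise jitter
Xs = np.array([0.5])                     # new test sample
Ks = rbf(Xs, X)                          # test-train cross-covariance
Kss = rbf(Xs, Xs)

alpha = np.linalg.solve(K, y)
mean = Ks @ alpha                              # predictive mean
var = Kss - Ks @ np.linalg.solve(K, Ks.T)      # predictive variance
```

The proposed model augments this machinery so that, alongside the predictive mean and variance, each test sample also gets an interpretable locally linear weight vector with its own uncertainty.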
arXiv Detail & Related papers (2020-07-03T13:22:24Z) - Deep Autoencoding Topic Model with Scalable Hybrid Bayesian Inference [55.35176938713946]
We develop deep autoencoding topic model (DATM) that uses a hierarchy of gamma distributions to construct its multi-stochastic-layer generative network.
We propose a Weibull upward-downward variational encoder that deterministically propagates information upward via a deep neural network, followed by a downward generative model.
The efficacy and scalability of our models are demonstrated on both unsupervised and supervised learning tasks on big corpora.
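The Weibull variational encoder mentioned above relies on the fact that a Weibull draw can be written as a deterministic transform of uniform noise (the inverse CDF), which is what makes reparameterized gradients possible. A minimal numpy sketch, with illustrative parameter values:

```python
import numpy as np

def weibull_reparam_sample(k, lam, size, rng):
    """Inverse-CDF draw from Weibull(shape=k, scale=lam): the sample is a
    deterministic function of uniform noise, so gradients with respect to
    (k, lam) can flow through the draw."""
    u = rng.uniform(size=size)
    return lam * (-np.log1p(-u)) ** (1.0 / k)

rng = np.random.default_rng(0)
# With k = 1 the Weibull reduces to Exponential(scale=lam), whose mean is lam.
s = weibull_reparam_sample(k=1.0, lam=2.0, size=100_000, rng=rng)
```

The Weibull is a convenient variational family here because it is nonnegative and close in shape to the gamma, while admitting this simple closed-form reparameterization.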
arXiv Detail & Related papers (2020-06-15T22:22:56Z) - Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.