Learning Interpretable Deep Disentangled Neural Networks for Hyperspectral Unmixing
- URL: http://arxiv.org/abs/2310.02340v1
- Date: Tue, 3 Oct 2023 18:21:37 GMT
- Title: Learning Interpretable Deep Disentangled Neural Networks for Hyperspectral Unmixing
- Authors: Ricardo Augusto Borsoi, Deniz Erdoğmuş, Tales Imbiriba
- Abstract summary: We propose a new interpretable deep learning method for hyperspectral unmixing that accounts for nonlinearity and endmember variability.
The model is learned end-to-end using backpropagation, and trained using a self-supervised strategy.
Experimental results on synthetic and real datasets illustrate the performance of the proposed method.
- Score: 16.02193274044797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although considerable effort has been dedicated to improving the solution to
the hyperspectral unmixing problem, non-idealities such as complex radiation
scattering and endmember variability negatively impact the performance of most
existing algorithms and can be very challenging to address. Recently, deep
learning-based frameworks have been explored for hyperspectral unmixing due to
their flexibility and powerful representation capabilities. However, such
techniques either do not address the non-idealities of the unmixing problem, or
rely on black-box models which are not interpretable. In this paper, we propose
a new interpretable deep learning method for hyperspectral unmixing that
accounts for nonlinearity and endmember variability. The proposed method
leverages a probabilistic variational deep-learning framework, where
disentanglement learning is employed to properly separate the abundances and
endmembers. The model is learned end-to-end using stochastic backpropagation,
and trained using a self-supervised strategy which leverages benefits from
semi-supervised learning techniques. Furthermore, the model is carefully
designed to provide a high degree of interpretability. This includes modeling
the abundances as a Dirichlet distribution, the endmembers using
low-dimensional deep latent variable representations, and using two-stream
neural networks composed of additive piecewise-linear/nonlinear components.
Experimental results on synthetic and real datasets illustrate the performance
of the proposed method compared to state-of-the-art algorithms.
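To make the listed ingredients concrete, the following is a minimal PyTorch sketch of an architecture with a Dirichlet posterior over abundances, low-dimensional latent codes decoded into endmembers, and a two-stream reconstruction with an additive nonlinear correction. All layer sizes, the band count, and the exact form of each stream are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    from torch.distributions import Dirichlet

    BATCH, P, LATENT, BANDS = 16, 3, 4, 198   # pixels, endmembers, latent dim, bands (assumed)

    class DisentangledUnmixer(nn.Module):
        def __init__(self):
            super().__init__()
            # Abundance encoder: pixel -> Dirichlet concentration parameters.
            self.abund_enc = nn.Sequential(
                nn.Linear(BANDS, 64), nn.ReLU(), nn.Linear(64, P), nn.Softplus())
            # Endmember encoder/decoder: per-pixel low-dimensional latents.
            self.em_enc = nn.Sequential(
                nn.Linear(BANDS, 64), nn.ReLU(), nn.Linear(64, P * LATENT))
            self.em_dec = nn.Sequential(
                nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, BANDS))
            # Second stream: additive nonlinear correction to the linear mixture.
            self.nonlin = nn.Sequential(
                nn.Linear(BANDS, 64), nn.ReLU(), nn.Linear(64, BANDS))

        def forward(self, y):
            alpha = self.abund_enc(y) + 1e-3           # positive concentrations
            a = Dirichlet(alpha).rsample()             # abundances on the simplex
            z = self.em_enc(y).view(-1, P, LATENT)     # endmember latent codes
            M = self.em_dec(z)                         # endmembers, (batch, P, BANDS)
            lin = torch.einsum('bp,bpc->bc', a, M)     # linear mixing stream
            return lin + self.nonlin(lin), a, M        # two-stream reconstruction

    model = DisentangledUnmixer()
    y = torch.rand(BATCH, BANDS)
    y_hat, a, M = model(y)
    ((y - y_hat) ** 2).mean().backward()               # gradients flow through rsample

Since PyTorch's Dirichlet supports reparameterized sampling, the reconstruction loss can be backpropagated through the abundance sample, which is what makes end-to-end training with stochastic backpropagation possible; the paper's full variational objective and self-supervised training loop are not reproduced here.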
Related papers
- Deep Nonlinear Hyperspectral Unmixing Using Multi-task Learning [0.0]
In this paper, we propose an unsupervised nonlinear unmixing approach based on deep learning.
We introduce an auxiliary task that enforces the model's two branches to work together.
This technique acts as a regularizer that mitigates overfitting and improves the performance of the overall network.
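As a rough illustration of the two-branch idea with an auxiliary regularizing task, here is a hedged PyTorch sketch; the branch definitions, the choice of auxiliary task, and the loss weight are all assumptions rather than the paper's design.

    import torch
    import torch.nn as nn

    bands, endmembers = 198, 3

    encoder = nn.Sequential(nn.Linear(bands, 64), nn.ReLU(),
                            nn.Linear(64, endmembers), nn.Softmax(dim=-1))
    linear_branch = nn.Linear(endmembers, bands, bias=False)    # linear mixing
    nonlinear_branch = nn.Sequential(nn.Linear(endmembers, 64), nn.ReLU(),
                                     nn.Linear(64, bands))

    y = torch.rand(32, bands)                 # a batch of pixels
    a = encoder(y)                            # abundance-like codes
    rec_lin = linear_branch(a)                # branch 1: linear reconstruction
    rec_nonlin = nonlinear_branch(a)          # branch 2: nonlinear reconstruction

    # Main tasks: each branch reconstructs the input. Auxiliary task: tie the
    # two branch outputs together, acting as a regularizer against overfitting.
    main_loss = ((y - rec_lin) ** 2).mean() + ((y - rec_nonlin) ** 2).mean()
    aux_loss = ((rec_lin - rec_nonlin) ** 2).mean()
    (main_loss + 0.1 * aux_loss).backward()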
arXiv Detail & Related papers (2024-02-05T02:52:25Z)
- Batch Active Learning from the Perspective of Sparse Approximation [12.51958241746014]
Active learning enables efficient model training by leveraging interactions between machine learning agents and human annotators.
We study batch active learning and propose a novel framework that formulates it from the perspective of sparse approximation.
Our active learning method aims to find an informative subset from the unlabeled data pool such that the corresponding training loss function approximates its full data pool counterpart.
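The stated objective, picking a subset whose loss approximates the full-pool loss, can be pictured with a toy greedy matching-pursuit loop over per-example gradients; the random embeddings and the simple greedy solver below are stand-ins, not the paper's method.

    import numpy as np

    rng = np.random.default_rng(0)
    pool = rng.normal(size=(500, 32))        # stand-in per-example gradient embeddings
    target = pool.sum(axis=0)                # full-pool gradient to be approximated
    budget, chosen = 10, []
    residual = target.copy()

    for _ in range(budget):
        # Greedy step: the example most aligned with what the current
        # subset has not yet explained.
        scores = pool @ residual
        scores[chosen] = -np.inf             # no repeats
        chosen.append(int(np.argmax(scores)))
        residual = target - pool[chosen].sum(axis=0)

    print(sorted(chosen))                    # indices to send for labeling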
arXiv Detail & Related papers (2022-11-01T03:20:28Z)
- Bayesian Spline Learning for Equation Discovery of Nonlinear Dynamics with Quantified Uncertainty [8.815974147041048]
We develop a novel framework to identify parsimonious governing equations of nonlinear (spatiotemporal) dynamics from sparse, noisy data with quantified uncertainty.
The proposed algorithm is evaluated on multiple nonlinear dynamical systems governed by canonical ordinary and partial differential equations.
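The generic backbone of such equation-discovery methods is sparse regression over a library of candidate terms; the toy sketch below uses sequential thresholding on noisy exponential-decay data and omits the paper's Bayesian spline representation and its uncertainty quantification.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 400)
    x = np.exp(-0.5 * t) + 0.01 * rng.normal(size=t.size)   # noisy decay data
    dx = np.gradient(x, t)                                   # crude derivative estimate

    # Candidate library: dx/dt = Theta(x) @ xi, with xi assumed sparse.
    Theta = np.column_stack([np.ones_like(x), x, x ** 2])
    xi, *_ = np.linalg.lstsq(Theta, dx, rcond=None)
    for _ in range(10):                                      # sequential thresholding
        small = np.abs(xi) < 0.05
        xi[small] = 0.0
        big = ~small
        xi[big], *_ = np.linalg.lstsq(Theta[:, big], dx, rcond=None)

    print(xi)   # ideally close to [0, -0.5, 0]: recovers dx/dt = -0.5 x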
arXiv Detail & Related papers (2022-10-14T20:37:36Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed as a structure prior to reveal the underlying signal interdependencies.
Deep unrolling and deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
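For intuition about the deep-unrolling side, here is a minimal LISTA-style PyTorch sketch in which each ISTA iteration becomes one learnable layer; the block-sparsity structure and the deep-equilibrium variant are simplified away, so treat this only as the generic pattern.

    import torch
    import torch.nn as nn

    m, n, K = 64, 128, 10                     # measurements, atoms, unrolled steps

    class UnrolledISTA(nn.Module):
        def __init__(self):
            super().__init__()
            self.D = nn.Parameter(torch.randn(m, n) / m ** 0.5)   # dictionary
            self.step = nn.Parameter(torch.tensor(0.1))
            self.thresh = nn.Parameter(torch.tensor(0.05))

        def forward(self, y):
            x = torch.zeros(y.shape[0], n)
            for _ in range(K):                # each iteration is one "layer"
                grad = (x @ self.D.T - y) @ self.D
                x = x - self.step * grad
                x = torch.sign(x) * torch.relu(x.abs() - self.thresh)  # soft-threshold
            return x

    net = UnrolledISTA()
    y = torch.randn(8, m)
    codes = net(y)
    ((codes @ net.D.T - y) ** 2).mean().backward()   # end-to-end trainable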
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning, showing that random augmentations naturally lead to encoders that output distributions rather than point embeddings.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques which embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
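The distributional-embedding idea can be sketched in a few lines: an encoder emits a mean and variance per node, and representations are drawn by reparameterization. The heads below are plain linear layers standing in for a graph encoder, and the contrastive objective itself is omitted.

    import torch
    import torch.nn as nn

    num_nodes, feat_dim, emb_dim = 100, 16, 8
    x = torch.rand(num_nodes, feat_dim)         # node features

    mu_head = nn.Linear(feat_dim, emb_dim)
    logvar_head = nn.Linear(feat_dim, emb_dim)

    mu, logvar = mu_head(x), logvar_head(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # one sample per node

    # Downstream similarities can be averaged over samples, and the variance
    # head provides per-node uncertainty estimates.
    print(z.shape, logvar.exp().mean().item())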
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
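The quantity being modeled, node infection probabilities evolving over a diffusion network, can be pictured with a plain mean-field update. In the paper both the network and the dynamics are learned; the sketch below only simulates them on an assumed random graph with a fixed infection rate.

    import numpy as np

    rng = np.random.default_rng(2)
    n, steps, beta = 20, 50, 0.2
    A = (rng.random((n, n)) < 0.1).astype(float)   # assumed diffusion graph
    np.fill_diagonal(A, 0.0)

    p = np.zeros(n)
    p[0] = 1.0                                     # seed node
    for _ in range(steps):
        # Mean-field update: a susceptible node is infected through each
        # infected neighbor independently.
        p = p + (1 - p) * (1 - np.prod(1 - beta * A * p, axis=1))

    print(p.round(2))                              # approximate influence per node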
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
- Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of Convolutional Neural Networks [26.434705114982584]
We propose an efficient interpretation method for convolutional neural networks.
Experimental results show that the proposed method can reduce the execution time by up to 30%.
arXiv Detail & Related papers (2021-02-15T19:10:00Z)
- Deep Unfolding Network for Image Super-Resolution [159.50726840791697]
This paper proposes an end-to-end trainable unfolding network which leverages both learning-based methods and model-based methods.
The proposed network inherits the flexibility of model-based methods to super-resolve blurry, noisy images for different scale factors via a single model.
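The unfolding pattern described here alternates a model-based data-consistency step with a learned prior step. The sketch below mimics that loop with an assumed average-pooling degradation and a tiny CNN denoiser; the real network's modules and degradation handling differ.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    scale = 2
    denoiser = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(16, 1, 3, padding=1))

    y = torch.rand(1, 1, 32, 32)                        # low-res observation
    x = F.interpolate(y, scale_factor=scale, mode='bilinear',
                      align_corners=False)              # initial estimate

    for _ in range(4):                                   # unfolded iterations
        # Data step: pull the estimate toward the observation under the
        # (assumed) degradation model, here simple average-pool downsampling.
        down = F.avg_pool2d(x, scale)
        residual = F.interpolate(y - down, scale_factor=scale, mode='nearest')
        x = x + 0.5 * residual
        # Prior step: a learned denoiser refines the estimate.
        x = x + denoiser(x)

    print(x.shape)   # torch.Size([1, 1, 64, 64])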
arXiv Detail & Related papers (2020-03-23T17:55:42Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
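A BP-Layer of this kind can be pictured as a truncated min-sum sweep (max-product in negative log space) written with differentiable tensor ops. The toy below runs one forward sweep on a 1-D chain with assumed random unary costs; the paper's layer operates on 2-D grids with learned costs.

    import torch

    T, L = 6, 4                                    # chain length, number of labels
    unary = torch.rand(T, L, requires_grad=True)   # per-node label costs
    lam = 1.0                                      # pairwise smoothness weight
    pairwise = lam * (torch.arange(L)[:, None]
                      - torch.arange(L)[None, :]).abs().float()

    msgs = [torch.zeros(L)]                        # message into node t from the left
    for t in range(1, T):
        prev = unary[t - 1] + msgs[t - 1]          # accumulated cost at node t-1
        # min-sum step: best previous label plus transition cost, per label
        msgs.append((prev[:, None] + pairwise).min(dim=0).values)

    beliefs = unary + torch.stack(msgs)            # forward beliefs after one sweep
    labels = beliefs.argmin(dim=1)
    beliefs.sum().backward()                       # differentiable wrt the unary costs
    print(labels)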
arXiv Detail & Related papers (2020-03-13T13:11:35Z)