Dynamical Hyperspectral Unmixing with Variational Recurrent Neural
Networks
- URL: http://arxiv.org/abs/2303.10566v1
- Date: Sun, 19 Mar 2023 04:51:34 GMT
- Title: Dynamical Hyperspectral Unmixing with Variational Recurrent Neural
Networks
- Authors: Ricardo Augusto Borsoi, Tales Imbiriba, Pau Closas
- Abstract summary: Multitemporal hyperspectral unmixing (MTHU) is a fundamental tool in the analysis of hyperspectral image sequences.
We propose an unsupervised MTHU algorithm based on variational recurrent neural networks.
- Score: 25.051918587650636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multitemporal hyperspectral unmixing (MTHU) is a fundamental tool in the
analysis of hyperspectral image sequences. It reveals the dynamical evolution
of the materials (endmembers) and of their proportions (abundances) in a given
scene. However, adequately accounting for the spatial and temporal variability
of the endmembers in MTHU is challenging, and has not been fully addressed so
far in unsupervised frameworks. In this work, we propose an unsupervised MTHU
algorithm based on variational recurrent neural networks. First, a stochastic
model is proposed to represent both the dynamical evolution of the endmembers
and their abundances, as well as the mixing process. Moreover, a new model
based on a low-dimensional parametrization is used to represent spatial and
temporal endmember variability, significantly reducing the number of variables
to be estimated. We formulate MTHU as a Bayesian inference problem. However,
this problem admits no analytical solution due to the nonlinearity and
non-Gaussianity of the model. Thus, we propose a
solution based on deep variational inference, in which the posterior
distribution of the abundances and endmembers is represented using a
combination of recurrent neural networks and a physically motivated model.
The parameters of the model are learned using stochastic backpropagation.
Experimental results show that the proposed method outperforms
state-of-the-art MTHU algorithms.
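As a rough illustration of the approach (a sketch, not the authors' implementation), the snippet below amortizes a Gaussian posterior over the abundances with a GRU, decodes through a linear mixing model as the physically motivated component, and trains an ELBO with the reparameterization trick, i.e., stochastic backpropagation. The layer sizes, the fixed endmember matrix, and the unit-Gaussian prior are illustrative assumptions; the paper additionally models endmember dynamics with a low-dimensional parametrization.

```python
# Minimal sketch of deep variational inference for multitemporal unmixing.
# Illustrative only: a GRU amortizes a Gaussian posterior over per-pixel
# abundances a_t, and a linear mixing model y_t ~ E a_t acts as the decoder.
import torch
import torch.nn as nn

class VRNNUnmixer(nn.Module):
    def __init__(self, n_bands=50, n_endmembers=3, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_bands, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, n_endmembers)      # posterior mean (logits)
        self.logvar = nn.Linear(hidden, n_endmembers)  # posterior log-variance
        # Fixed endmember matrix for brevity; the paper instead estimates a
        # dynamic, low-dimensional parametrization of time-varying endmembers.
        self.E = nn.Parameter(torch.rand(n_bands, n_endmembers))

    def forward(self, y):                    # y: (batch, time, n_bands)
        h, _ = self.rnn(y)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparameterize
        a = torch.softmax(z, dim=-1)         # abundances: nonneg., sum to one
        return a @ self.E.t(), mu, logvar    # reconstructed pixels

def neg_elbo(y, y_hat, mu, logvar):
    rec = ((y - y_hat) ** 2).sum()           # Gaussian likelihood (up to const.)
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()  # KL to N(0, I)
    return rec + kl

model = VRNNUnmixer()
y = torch.rand(8, 10, 50)                    # 8 pixels, 10 time steps, 50 bands
neg_elbo(y, *model(y)).backward()            # stochastic backpropagation
```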
Related papers
- Trajectory Flow Matching with Applications to Clinical Time Series Modeling [77.58277281319253]
Trajectory Flow Matching (TFM) trains a Neural SDE in a simulation-free manner, bypassing backpropagation through the dynamics.
We demonstrate improved performance on three clinical time series datasets in terms of absolute performance and uncertainty prediction.
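For context, here is a minimal sketch of the simulation-free flow-matching principle this builds on (generic conditional flow matching with linear interpolants, not the paper's clinical TFM model): the vector field is regressed directly onto interpolant velocities, so training never simulates trajectories or backpropagates through a solver.

```python
# Generic conditional flow matching, shown for illustration only.
import torch
import torch.nn as nn

dim = 2
v = nn.Sequential(nn.Linear(dim + 1, 64), nn.Tanh(), nn.Linear(64, dim))
opt = torch.optim.Adam(v.parameters(), lr=1e-3)

for step in range(1000):
    x0 = torch.randn(128, dim)            # source samples (e.g. noise)
    x1 = torch.randn(128, dim) + 3.0      # stand-in for data samples
    t = torch.rand(128, 1)
    xt = (1 - t) * x0 + t * x1            # linear interpolant between pairs
    target = x1 - x0                      # its (constant) velocity
    loss = ((v(torch.cat([xt, t], dim=-1)) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# Sampling would then integrate dx/dt = v(x, t) from t = 0 to 1.
```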
arXiv Detail & Related papers (2024-10-28T15:54:50Z)
- Uncertainty Quantification of Graph Convolution Neural Network Models of Evolving Processes [0.8749675983608172]
We show that Stein variational inference is a viable alternative to Monte Carlo methods for complex neural network models.
For our exemplars, Stein variational inference gave similar uncertainty profiles through time compared to Hamiltonian Monte Carlo.
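For context, the sketch below implements the standard Stein variational gradient descent update (Liu and Wang, 2016) that such inference relies on; the RBF bandwidth and toy Gaussian target are illustrative, not the paper's GCN setup.

```python
# Standard SVGD update with an RBF kernel (illustrative toy target).
import torch

def svgd_step(x, log_prob, lr=0.1, bw=1.0):
    # x: (n_particles, dim); log_prob returns one log-density per particle
    x = x.detach().requires_grad_(True)
    grad_logp = torch.autograd.grad(log_prob(x).sum(), x)[0]   # (n, d)
    diff = x.unsqueeze(1) - x.unsqueeze(0)                     # (n, n, d)
    k = torch.exp(-(diff ** 2).sum(-1) / (2 * bw ** 2))        # kernel matrix
    grad_k = -(k.unsqueeze(-1) * diff) / bw ** 2               # d k / d x_j
    phi = (k @ grad_logp + grad_k.sum(0)) / x.shape[0]         # Stein direction
    return (x + lr * phi).detach()

particles = torch.randn(100, 2)                  # initial particle cloud
for _ in range(200):                             # drift toward N(2, I)
    particles = svgd_step(particles, lambda z: -0.5 * ((z - 2.0) ** 2).sum(-1))
```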
arXiv Detail & Related papers (2024-02-17T03:19:23Z)
- SPDE priors for uncertainty quantification of end-to-end neural data assimilation schemes [4.213142548113385]
Recent advances in the deep learning community make it possible to address this problem with neural architectures that embed a variational data assimilation framework.
In this work, we draw from SPDE-based processes to estimate prior models able to handle non-stationary covariances in both space and time.
Our neural variational scheme is modified to embed an augmented state formulation, estimating both the state and the SPDE parametrization.
arXiv Detail & Related papers (2024-02-02T19:18:12Z)
- DiffHybrid-UQ: Uncertainty Quantification for Differentiable Hybrid Neural Modeling [4.76185521514135]
We introduce a novel method, DiffHybrid-UQ, for effective and efficient uncertainty propagation and estimation in hybrid neural differentiable models.
Specifically, our approach effectively discerns and quantifies both aleatoric uncertainties, arising from data noise, and epistemic uncertainties, resulting from model-form discrepancies and data sparsity.
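A generic sketch of that split (a heteroscedastic deep ensemble, not DiffHybrid-UQ itself): each member predicts a mean and a noise variance, so the average predicted variance estimates aleatoric noise while the spread of member means estimates epistemic uncertainty. The toy data and network sizes are assumptions.

```python
# Heteroscedastic deep ensemble: one generic way to separate the two
# uncertainty types (illustrative; not the paper's differentiable-hybrid setup).
import torch
import torch.nn as nn

def make_member():                       # outputs (mean, log_variance)
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))

def gaussian_nll(net, x, y):             # per-member training objective
    mean, log_var = net(x).chunk(2, dim=-1)
    return (0.5 * (log_var + (y - mean) ** 2 / log_var.exp())).mean()

ensemble = [make_member() for _ in range(5)]
for net in ensemble:                     # same data, different random inits
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(500):
        x = torch.rand(64, 1) * 2 - 1
        y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)   # noisy toy data
        opt.zero_grad(); gaussian_nll(net, x, y).backward(); opt.step()

x = torch.linspace(-1, 1, 50).unsqueeze(-1)
outs = torch.stack([net(x) for net in ensemble])   # (members, points, 2)
means, log_vars = outs[..., :1], outs[..., 1:]
aleatoric = log_vars.exp().mean(0)       # average predicted data noise
epistemic = means.var(0)                 # disagreement between members
```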
arXiv Detail & Related papers (2023-12-30T07:40:47Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
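An illustrative fixed-point sketch in this spirit (not the paper's solver; its convergence guarantees come from properties of the learned operator that plain iteration does not enforce): a gradient step on a per-band blur data term followed by a learnable regularizer network, iterated until the update stabilizes.

```python
# Deep-equilibrium-style deconvolution sketch: iterate z <- f(z, y) to a
# fixed point, where f is one regularized gradient step. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DEQDeconv(nn.Module):
    def __init__(self, channels=31):              # e.g. 31 spectral bands
        super().__init__()
        self.reg = nn.Sequential(                  # learnable regularizer
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1))
        self.eta = nn.Parameter(torch.tensor(0.1)) # learnable step size

    def step(self, z, y, kernel):                  # kernel: (C, 1, k, k), odd k
        p, c = kernel.shape[-1] // 2, z.shape[1]
        resid = F.conv2d(z, kernel, padding=p, groups=c) - y   # Hz - y
        grad = F.conv_transpose2d(resid, kernel, padding=p, groups=c)
        return z - self.eta * grad - self.reg(z)   # one regularized update

    def forward(self, y, kernel, iters=50, tol=1e-4):
        z = y.clone()
        for _ in range(iters):                     # plain fixed-point iteration
            z_next = self.step(z, y, kernel)
            if (z_next - z).norm() < tol * (z.norm() + 1e-12):
                break
            z = z_next
        return z

model = DEQDeconv()
y = torch.rand(1, 31, 32, 32)                      # blurred hyperspectral cube
kernel = torch.ones(31, 1, 5, 5) / 25.0            # per-band box blur
x_hat = model(y, kernel)
```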
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
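A toy sketch of that "train a differentiable surrogate once, then recover parameters by autodiff" workflow; the simulator, network, and parameter ranges below are stand-ins rather than the paper's model Hamiltonian or scattering data.

```python
# Stage 1: fit a surrogate to simulated data; Stage 2: recover unknown
# parameters from observations by gradient descent through the frozen net.
import torch
import torch.nn as nn

def simulator(q, t1, t2):                # toy stand-in for the physical model
    return torch.sin(t1 * q) * torch.exp(-t2 * q ** 2)

net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(3000):                    # train once over the parameter range
    q, t1, t2 = torch.rand(256, 1) * 4, torch.rand(256, 1) * 2, torch.rand(256, 1)
    loss = ((net(torch.cat([q, t1, t2], -1)) - simulator(q, t1, t2)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

q_obs = torch.linspace(0, 4, 100).unsqueeze(-1)
y_obs = simulator(q_obs, torch.tensor(1.3), torch.tensor(0.4))   # "experiment"
theta = torch.tensor([0.5, 0.8], requires_grad=True)             # initial guess
opt2 = torch.optim.Adam([theta], lr=1e-2)
for _ in range(500):                     # fast refit, reusable in real time
    inp = torch.cat([q_obs, theta[0].expand_as(q_obs),
                     theta[1].expand_as(q_obs)], -1)
    fit = ((net(inp) - y_obs) ** 2).mean()
    opt2.zero_grad(); fit.backward(); opt2.step()
```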
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Generalized Neural Closure Models with Interpretability [28.269731698116257]
We develop a novel and versatile methodology of unified neural partial delay differential equations.
We augment existing/low-fidelity dynamical models directly in their partial differential equation (PDE) forms with both Markovian and non-Markovian neural network (NN) closure parameterizations.
We demonstrate the new generalized neural closure models (gnCMs) framework using four sets of experiments based on advecting nonlinear waves, shocks, and ocean acidification models.
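A minimal sketch of a Markovian closure in this spirit: a coarse advection right-hand side is augmented with a learned convolutional term and rolled out with a differentiable Euler integrator. The low-fidelity model and discretization are illustrative assumptions, not the gnCM experiments.

```python
# Low-fidelity PDE right-hand side + learned closure term, rolled out with
# a differentiable Euler integrator (illustrative; not the gnCM code).
import torch
import torch.nn as nn

class ClosureModel(nn.Module):
    def __init__(self, c=1.0, dx=1.0 / 128):
        super().__init__()
        self.c, self.dx = c, dx
        self.closure = nn.Sequential(              # learned missing physics
            nn.Conv1d(1, 16, 5, padding=2), nn.Tanh(),
            nn.Conv1d(16, 1, 5, padding=2))

    def rhs(self, u):                              # u: (batch, 1, n_grid)
        dudx = (torch.roll(u, -1, -1) - torch.roll(u, 1, -1)) / (2 * self.dx)
        return -self.c * dudx + self.closure(u)    # advection + NN closure

    def forward(self, u, dt=1e-3, steps=100):
        for _ in range(steps):                     # training can backprop
            u = u + dt * self.rhs(u)               # through the rollout
        return u

model = ClosureModel()
u0 = torch.sin(2 * torch.pi * torch.linspace(0, 1, 128)).view(1, 1, -1)
uT = model(u0)                                     # coarse forecast + closure
```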
arXiv Detail & Related papers (2023-01-15T21:57:43Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Lower Bounds (ELBOs) for ME-NODE, and develop efficient training algorithms.
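A rough sketch of the fixed-plus-random-effects idea (illustrative, not ME-NODE, which derives its ELBO via Wong-Zakai SDE approximations): shared dynamics are conditioned on a per-subject random effect with a variational Gaussian posterior, trained with the reparameterization trick and a KL penalty.

```python
# Shared (fixed-effect) dynamics conditioned on per-subject random effects,
# with a variational Gaussian posterior over those effects. Illustrative only.
import torch
import torch.nn as nn

class MixedEffectsODE(nn.Module):
    def __init__(self, dim=2, n_subjects=20, b_dim=1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim + b_dim, 64), nn.Tanh(),
                               nn.Linear(64, dim))       # shared dynamics
        self.b_mu = nn.Parameter(torch.zeros(n_subjects, b_dim))    # q(b_i)
        self.b_logvar = nn.Parameter(torch.zeros(n_subjects, b_dim))

    def forward(self, x0, subject, dt=0.1, steps=10):
        mu, logvar = self.b_mu[subject], self.b_logvar[subject]
        b = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # sample effect
        xs, x = [], x0
        for _ in range(steps):                                # Euler rollout
            x = x + dt * self.f(torch.cat([x, b], dim=-1))
            xs.append(x)
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()  # vs N(0, I)
        return torch.stack(xs, dim=1), kl    # a fit loss would add recon. error

model = MixedEffectsODE()
traj, kl = model(torch.randn(4, 2), subject=torch.tensor([0, 1, 2, 3]))
```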
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
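One common presentation of the closed-form gating behind such cells, simplified relative to the paper's full CfC formulation (treat the head structure below as an assumption): a time-dependent sigmoid gate blends two network heads, so the hidden state at any elapsed time is evaluated in a single pass with no numerical solver.

```python
# Simplified CfC-style cell: closed-form time dependence via a sigmoid gate.
import torch
import torch.nn as nn

class CfCCell(nn.Module):
    def __init__(self, in_dim, hidden):
        super().__init__()
        def head():
            return nn.Sequential(nn.Linear(in_dim + hidden, hidden), nn.Tanh())
        self.f, self.g, self.h = head(), head(), head()

    def forward(self, x, h_prev, t):          # t: time elapsed since last input
        z = torch.cat([x, h_prev], dim=-1)
        gate = torch.sigmoid(-self.f(z) * t)  # closed-form decay, no ODE solve
        return gate * self.g(z) + (1 - gate) * self.h(z)

cell = CfCCell(in_dim=3, hidden=8)
h = torch.zeros(1, 8)
for x, dt in [(torch.randn(1, 3), 0.5), (torch.randn(1, 3), 2.0)]:
    h = cell(x, h, dt)                        # handles irregular time steps
```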
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Neural Dynamic Mode Decomposition for End-to-End Modeling of Nonlinear Dynamics [49.41640137945938]
We propose a neural dynamic mode decomposition for estimating a lift function based on neural networks.
With our proposed method, the forecast error is backpropagated through the neural networks and the spectral decomposition.
Our experiments demonstrate the effectiveness of our proposed method in terms of eigenvalue estimation and forecast performance.
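A minimal differentiable sketch of that pipeline (everything here is illustrative): a network lifts states into feature space, a linear operator is fit by least squares on consecutive feature pairs, and the one-step forecast error backpropagates through both the lifting network and the fit.

```python
# DMD on learned features with end-to-end backpropagation (illustrative).
import torch
import torch.nn as nn

lift = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 8))

def neural_dmd(snapshots):                     # snapshots: (T, state_dim)
    phi = lift(snapshots)                      # lifted features (T, 8)
    X, Y = phi[:-1], phi[1:]                   # consecutive pairs
    A = X.t() @ X + 1e-6 * torch.eye(X.shape[1])   # ridge for stability
    K = torch.linalg.solve(A, X.t() @ Y)       # least-squares operator
    eigvals = torch.linalg.eigvals(K)          # spectral decomposition
    loss = ((X @ K - Y) ** 2).mean()           # one-step forecast error
    return eigvals, loss

traj = torch.cumsum(torch.randn(50, 2) * 0.1, dim=0)   # toy trajectory
eigvals, loss = neural_dmd(traj)
loss.backward()                                # reaches `lift` and the fit
```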
arXiv Detail & Related papers (2020-12-11T08:34:26Z)
- Physical invariance in neural networks for subgrid-scale scalar flux modeling [5.333802479607541]
We present a new strategy to model the subgrid-scale scalar flux in a three-dimensional turbulent incompressible flow using physics-informed neural networks (NNs).
We show that the proposed transformation-invariant NN model outperforms both purely data-driven ones and parametric state-of-the-art subgrid-scale models.
arXiv Detail & Related papers (2020-10-09T16:09:54Z)