Deep Nonlinear Hyperspectral Unmixing Using Multi-task Learning
- URL: http://arxiv.org/abs/2402.03398v1
- Date: Mon, 5 Feb 2024 02:52:25 GMT
- Title: Deep Nonlinear Hyperspectral Unmixing Using Multi-task Learning
- Authors: Saeid Mehrdad, Seyed AmirHossein Janani
- Abstract summary: In this paper, we propose an unsupervised nonlinear unmixing approach based on deep learning.
We introduce an auxiliary task that forces the two branches to work together.
This technique can be considered a regularizer that mitigates overfitting and improves the performance of the overall network.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nonlinear hyperspectral unmixing has recently received considerable
attention, as linear mixture models do not yield acceptable solutions in some
problems. In fact, most nonlinear unmixing methods are designed under specific
assumptions on the nonlinearity model, which subsequently limits unmixing
performance. In this paper, we propose an unsupervised nonlinear unmixing
approach based on deep learning that incorporates a general nonlinear model
with no special assumptions. The model consists of two branches. In the first
branch, endmembers are learned by reconstructing the rows of hyperspectral
images using several hidden layers, and in the second branch, abundance values
are learned from the columns of the respective images. Then, using multi-task
learning, we introduce an auxiliary task that forces the two branches to work
together. This technique can be considered a regularizer that mitigates
overfitting and improves the performance of the overall network. Extensive
experiments on synthetic and real data verify the effectiveness of the
proposed method compared to several state-of-the-art hyperspectral unmixing
methods.
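For intuition: the linear mixture model writes each pixel spectrum as y = E a + n (endmember matrix E, abundance vector a, noise n), while the paper's general model allows y = f(E, a) + n for an unspecified nonlinearity f. Below is a minimal sketch of how such a two-branch, multi-task unmixing autoencoder could be set up. The layer sizes, the softmax abundance constraint, and the bilinear coupling loss are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchUnmixingNet(nn.Module):
    """Sketch of a two-branch unmixing autoencoder with an auxiliary
    coupling task, loosely following the abstract's description."""

    def __init__(self, n_bands, n_pixels, n_endmembers, hidden=64):
        super().__init__()
        # Branch 1: autoencodes rows of Y (band images of length n_pixels);
        # the code of each row is read as one row of the endmember matrix E.
        self.row_encoder = nn.Sequential(
            nn.Linear(n_pixels, hidden), nn.ReLU(),
            nn.Linear(hidden, n_endmembers),
        )
        self.row_decoder = nn.Linear(n_endmembers, n_pixels)
        # Branch 2: autoencodes columns of Y (pixel spectra of length
        # n_bands); the softmax code of each column acts as abundances.
        self.col_encoder = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, n_endmembers),
        )
        self.col_decoder = nn.Linear(n_endmembers, n_bands)

    def forward(self, Y):
        # Y: (n_bands, n_pixels) hyperspectral image unfolded as a matrix.
        E = self.row_encoder(Y)                        # (n_bands, R) endmembers
        row_rec = self.row_decoder(E)                  # reconstruct rows of Y
        A = F.softmax(self.col_encoder(Y.t()), dim=1)  # (n_pixels, R) abundances
        col_rec = self.col_decoder(A)                  # reconstruct columns of Y
        # Auxiliary task: both branches must explain the same data, so the
        # product E @ A.t() should also reconstruct Y (coupling regularizer).
        joint_rec = E @ A.t()                          # (n_bands, n_pixels)
        return row_rec, col_rec, joint_rec

def multitask_loss(Y, row_rec, col_rec, joint_rec, lam=0.1):
    # Two reconstruction tasks plus the auxiliary consistency term,
    # which plays the role of the overfitting-mitigating regularizer.
    return (F.mse_loss(row_rec, Y)
            + F.mse_loss(col_rec, Y.t())
            + lam * F.mse_loss(joint_rec, Y))
```

Usage would look like `model = TwoBranchUnmixingNet(n_bands=200, n_pixels=10000, n_endmembers=4)`, training all three loss terms jointly; the weight `lam` is a hypothetical knob controlling how strongly the auxiliary task couples the branches.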
Related papers
- Nonlinear denoising score matching for enhanced learning of structured distributions [12.428200977408817]
Generalizing to a nonlinear drift allows for additional structure to be incorporated into the dynamics.
We demonstrate the effectiveness of this method on several examples.
arXiv Detail & Related papers (2024-05-24T15:14:23Z)
- Fast Semisupervised Unmixing Using Nonconvex Optimization [80.11512905623417]
We introduce a novel convex model for semi-/library-based unmixing.
We demonstrate the efficacy of alternating optimization methods for sparse semisupervised unmixing.
arXiv Detail & Related papers (2024-01-23T10:07:41Z)
- Learning Interpretable Deep Disentangled Neural Networks for Hyperspectral Unmixing [16.02193274044797]
We propose a new interpretable deep learning method for hyperspectral unmixing that accounts for nonlinearity and endmember variability.
The model is learned end-to-end using backpropagation, and trained using a self-supervised strategy.
Experimental results on synthetic and real datasets illustrate the performance of the proposed method.
arXiv Detail & Related papers (2023-10-03T18:21:37Z)
- Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z)
- Nonlinear Hyperspectral Unmixing based on Multilinear Mixing Model using Convolutional Autoencoders [6.867229549627128]
We propose a novel autoencoder-based network for unsupervised unmixing based on the multilinear mixing model, which accounts for multiple reflections.
Experiments on both the synthetic and real datasets demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2023-03-14T18:11:52Z)
- MixupE: Understanding and Improving Mixup from Directional Derivative Perspective [86.06981860668424]
We propose an improved version of Mixup, theoretically justified to deliver better generalization performance than the vanilla Mixup.
Our results show that the proposed method improves Mixup across multiple datasets using a variety of architectures.
arXiv Detail & Related papers (2022-12-27T07:03:52Z)
- Mixture-Net: Low-Rank Deep Image Prior Inspired by Mixture Models for Spectral Image Recovery [22.0246327137227]
This paper proposes a non-data-driven deep neural network for spectral image recovery problems.
The proposed approach, dubbed Mixture-Net, implicitly learns the prior information through the network.
Experiments show Mixture-Net outperforming state-of-the-art methods in recovery quality, with the advantage of architecture interpretability.
arXiv Detail & Related papers (2022-11-05T21:32:25Z)
- Hessian Eigenspectra of More Realistic Nonlinear Models [73.31363313577941]
We give a precise characterization of the Hessian eigenspectra for a broad family of nonlinear models.
Our analysis takes a step forward to identify the origin of many striking features observed in more complex machine learning models.
arXiv Detail & Related papers (2021-03-02T06:59:52Z)
- Nonlinear Independent Component Analysis for Continuous-Time Signals [85.59763606620938]
We study the classical problem of recovering a multidimensional source process from observations of mixtures of this process.
We show that this recovery is possible for many popular models of processes (up to order and monotone scaling of their coordinates) if the mixture is given by a sufficiently differentiable, invertible function.
arXiv Detail & Related papers (2021-02-04T20:28:44Z)
- On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation [0.0]
We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization.
Our method can be seen as a generalization of well-known homotopy methods for linear regression problems to the nonlinear case.
arXiv Detail & Related papers (2020-12-14T13:00:50Z)
- Learning Mixtures of Low-Rank Models [89.39877968115833]
We study the problem of learning mixtures of low-rank models.
We develop an algorithm that is guaranteed to recover the unknown matrices with near-optimal sample complexity.
In addition, the proposed algorithm is provably stable against random noise.
arXiv Detail & Related papers (2020-09-23T17:53:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.