Hyperspectral Unmixing Network Inspired by Unfolding an Optimization
Problem
- URL: http://arxiv.org/abs/2005.10856v2
- Date: Sat, 30 May 2020 10:14:58 GMT
- Title: Hyperspectral Unmixing Network Inspired by Unfolding an Optimization
Problem
- Authors: Chao Zhou
- Abstract summary: The hyperspectral image (HSI) unmixing task is essentially an inverse problem, which is commonly solved by optimization algorithms.
We propose two novel network architectures, named U-ADMM-AENet and U-ADMM-BUNet, for abundance estimation and blind unmixing.
We show that the unfolded structures can find corresponding interpretations in the machine learning literature, which further demonstrates the effectiveness of the proposed methods.
- Score: 2.4016406737205753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The hyperspectral image (HSI) unmixing task is essentially an inverse
problem, which is commonly solved by optimization algorithms under a predefined
(non-)linear mixture model. Although these optimization algorithms show
impressive performance, they are computationally demanding as they often
rely on an iterative updating scheme. Recently, the rise of neural networks has
inspired many learning-based algorithms in the unmixing literature. However,
most of them lack interpretability and require a large training dataset. A
natural question then arises: can one leverage model-based and learning-based
algorithms to achieve an interpretable and fast algorithm for the HSI
unmixing problem? In this paper, we propose two novel network architectures,
named U-ADMM-AENet and U-ADMM-BUNet, for abundance estimation and blind
unmixing respectively, by combining the conventional optimization-model-based
unmixing method and the rising learning-based unmixing method. We first
consider a linear mixture model with a sparsity constraint, then unfold the
Alternating Direction Method of Multipliers (ADMM) algorithm to construct the
unmixing network structures. We also show that the unfolded structures have
corresponding interpretations in the machine learning literature, which further
demonstrates the effectiveness of the proposed methods. Benefiting from this
interpretation, the proposed networks can be initialized by incorporating prior
information about the HSI data. Different from traditional unfolding networks,
we propose a new training strategy so that the proposed networks better fit
HSI applications. Extensive experiments show that the proposed methods achieve
much faster convergence and competitive performance, even with a very small
amount of training data, when compared with state-of-the-art algorithms.
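To make the unfolding idea concrete, below is a minimal sketch of unfolded ADMM layers for the sparsity-constrained linear mixture model y = Ma + n described in the abstract. It is an illustrative reconstruction under stated assumptions, not the authors' released code: the learnable maps `W` and `B`, the threshold `theta`, and the class names are hypothetical stand-ins for the roles of (MᵀM + ρI)⁻¹Mᵀ, (MᵀM + ρI)⁻¹ρ, and λ/ρ in the hand-derived ADMM updates.

```python
import torch
import torch.nn as nn


class UnfoldedADMMLayer(nn.Module):
    """One unfolded ADMM iteration for sparse, non-negative abundance estimation.

    W, B and theta are learned stand-ins for (M^T M + rho*I)^{-1} M^T,
    (M^T M + rho*I)^{-1} * rho and lambda/rho from the hand-derived updates.
    """

    def __init__(self, n_bands: int, n_endmembers: int):
        super().__init__()
        self.W = nn.Linear(n_bands, n_endmembers, bias=False)       # data-fidelity map
        self.B = nn.Linear(n_endmembers, n_endmembers, bias=False)  # coupling map
        self.theta = nn.Parameter(torch.tensor(0.01))                # soft threshold

    def forward(self, y, z, u):
        a = self.W(y) + self.B(z - u)        # a-update: learned quadratic subproblem
        z = torch.relu(a + u - self.theta)   # z-update: non-negative soft thresholding
        u = u + a - z                        # u-update: scaled dual ascent
        return a, z, u


class UnfoldedUnmixingNet(nn.Module):
    """Stack of K unfolded layers mapping a pixel spectrum y to abundances."""

    def __init__(self, n_bands: int, n_endmembers: int, n_layers: int = 5):
        super().__init__()
        self.n_endmembers = n_endmembers
        self.layers = nn.ModuleList(
            [UnfoldedADMMLayer(n_bands, n_endmembers) for _ in range(n_layers)]
        )

    def forward(self, y):
        z = y.new_zeros(y.shape[0], self.n_endmembers)  # auxiliary variable
        u = torch.zeros_like(z)                          # scaled dual variable
        for layer in self.layers:
            a, z, u = layer(y, z, u)
        return z  # estimated abundance vectors
```

In the paper's setting, each layer's W and B could be initialized from a known or estimated endmember matrix M and penalty ρ, which is what enables the prior-informed initialization mentioned in the abstract; the class names here are illustrative rather than the authors' U-ADMM-AENet/U-ADMM-BUNet modules.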
Related papers
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes federated conditional stochastic optimization algorithms for distributed machine learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on the momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - AskewSGD : An Annealed interval-constrained Optimisation method to train
Quantized Neural Networks [12.229154524476405]
We develop a new algorithm, Annealed Skewed SGD - AskewSGD - for training deep neural networks (DNNs) with quantized weights.
Unlike algorithms with active sets and feasible directions, AskewSGD avoids projections or optimization under the entire feasible set.
Experimental results show that the AskewSGD algorithm performs better than or on par with state of the art methods in classical benchmarks.
arXiv Detail & Related papers (2022-11-07T18:13:44Z) - Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent
Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z) - Distributed Optimization, Averaging via ADMM, and Network Topology [0.0]
We study the connection between network topology and convergence rates for different algorithms on a real world problem of sensor localization.
We also show interesting connections between ADMM and lifted Markov chains, besides providing an explicit characterization of its convergence.
arXiv Detail & Related papers (2020-09-05T21:44:39Z) - Deep unfolding of the weighted MMSE beamforming algorithm [9.518010235273783]
We propose the novel application of deep unfolding to the WMMSE algorithm for a MISO downlink channel.
Deep unfolding naturally incorporates expert knowledge, with the benefits of immediate and well-grounded architecture selection, fewer trainable parameters, and better explainability.
By means of simulations, we show that, in most of the settings, the unfolded WMMSE outperforms or performs equally to the WMMSE for a fixed number of iterations.
arXiv Detail & Related papers (2020-06-15T14:51:20Z) - Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding
Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds while retaining the same order of iteration complexity in theory.
Experiments on several benchmark datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based training combined with nonconvexity renders learning susceptible to initialization problems.
We propose fusing neighboring layers of deeper networks that are trained with random initializations.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)