A Generalized Multiscale Bundle-Based Hyperspectral Sparse Unmixing Algorithm
- URL: http://arxiv.org/abs/2401.13161v1
- Date: Wed, 24 Jan 2024 00:37:14 GMT
- Title: A Generalized Multiscale Bundle-Based Hyperspectral Sparse Unmixing Algorithm
- Authors: Luciano Carvalho Ayres, Ricardo Augusto Borsoi, José Carlos Moreira Bermudez, Sérgio José Melo de Almeida
- Abstract summary: In hyperspectral sparse unmixing, a successful approach employs spectral bundles to address the variability of the endmembers in the spatial domain.
We generalize a multiscale spatial regularization approach to solve the unmixing problem by incorporating group sparsity-inducing mixed norms.
- Score: 8.616208042031877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In hyperspectral sparse unmixing, a successful approach employs spectral
bundles to address the variability of the endmembers in the spatial domain.
However, the regularization penalties usually employed add substantial
computational complexity, and the solutions are very noise-sensitive. We
generalize a multiscale spatial regularization approach to solve the unmixing
problem by incorporating group sparsity-inducing mixed norms. Then, we propose
a noise-robust method that can take advantage of the bundle structure to deal
with endmember variability while ensuring inter- and intra-class sparsity in
abundance estimation with reasonable computational cost. We also present a
general heuristic to select the *most representative* abundance estimation
over multiple runs of the unmixing process, yielding a solution that is robust
and highly reproducible. Experiments illustrate the robustness and consistency
of the results when compared to related methods.
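Two ingredients of the abstract can be made concrete with a short sketch. The NumPy code below is illustrative only, not the authors' implementation: `group_l21_norm` evaluates a group $\ell_{2,1}$ mixed norm over endmember bundles (one common group-sparsity-inducing choice; the paper's exact mixed norms may differ), and `most_representative` is one plausible reading of the selection heuristic, picking the run whose abundance estimate is closest on average to all other runs. The function names and the distance choice are assumptions.

```python
import numpy as np

def group_l21_norm(A, groups):
    """Group l_{2,1} mixed norm: for each pixel, sum over bundles of the
    l2 norm of that bundle's abundance coefficients.
    A      : (num_library_signatures, num_pixels) abundance matrix
    groups : list of row-index arrays, one per endmember bundle"""
    return sum(np.linalg.norm(A[g, :], axis=0).sum() for g in groups)

def most_representative(estimates):
    """One plausible 'most representative run' rule (an assumption):
    return the estimate with the smallest mean Frobenius distance to
    the estimates produced by all other runs."""
    n = len(estimates)
    dists = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dists[i, j] = dists[j, i] = np.linalg.norm(estimates[i] - estimates[j])
    return estimates[int(np.argmin(dists.mean(axis=1)))]

# Toy usage: 3 bundles of 4 signatures each, 100 pixels, 5 runs.
rng = np.random.default_rng(0)
groups = [np.arange(k, k + 4) for k in range(0, 12, 4)]
runs = [np.abs(rng.normal(size=(12, 100))) for _ in range(5)]
print(group_l21_norm(runs[0], groups))
print(most_representative(runs).shape)
```

Penalizing the group norm drives entire bundles to zero (inter-class sparsity), while a complementary within-group norm, not shown here, would sparsify the coefficients inside each active bundle (intra-class sparsity).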
Related papers
- Analysis of the Non-variational Quantum Walk-based Optimisation Algorithm [0.0]
This paper introduces in detail a non-variational quantum algorithm designed to solve a wide range of optimisation problems.
The algorithm returns optimal and near-optimal solutions from repeated preparation and measurement of an amplified state.
arXiv Detail & Related papers (2024-07-29T13:54:28Z)
- Sparsity via Sparse Group $k$-max Regularization [22.05774771336432]
In this paper, we propose a novel and concise regularization, namely the sparse group $k$-max regularization.
We verify the effectiveness and flexibility of the proposed method through numerical experiments on both synthetic and real-world datasets.
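The blurb does not define the penalty, so the following is only a guess at its general shape, flagged loudly as an assumption: a truncated-$\ell_1$-style, group-wise "k-max" penalty that leaves each group's $k$ largest magnitudes unpenalized, which encourages at most $k$ significant entries per group.

```python
import numpy as np

def group_kmax_style_penalty(x, groups, k):
    """Illustrative guess, NOT the paper's definition: within each group,
    penalize every magnitude except the k largest, encouraging at most
    k significant entries per group."""
    total = 0.0
    for g in groups:
        mags = np.sort(np.abs(x[g]))[::-1]   # descending magnitudes
        total += mags[k:].sum()              # skip the k largest
    return total

x = np.array([0.9, -0.1, 0.0, 0.4, 0.0, -2.0])
print(group_kmax_style_penalty(x, [np.arange(3), np.arange(3, 6)], k=1))
```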
arXiv Detail & Related papers (2024-02-13T14:41:28Z)
- Fast Semisupervised Unmixing Using Nonconvex Optimization [80.11512905623417]
We introduce a novel model for semisupervised/library-based unmixing.
We demonstrate the efficacy of alternating optimization methods for sparse semisupervised unmixing.
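For context, a minimal library-based sparse-unmixing baseline (a generic sketch, not this paper's algorithm; the step size and penalty weight below are assumptions) solves a nonnegative $\ell_1$-regularized least-squares problem by proximal gradient descent:

```python
import numpy as np

def sparse_unmix_ista(y, D, lam, step, iters=500):
    """Minimize 0.5*||y - D a||^2 + lam*||a||_1 subject to a >= 0 via
    proximal gradient (ISTA) with a nonnegative soft-threshold prox."""
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - y)
        a = np.maximum(a - step * grad - step * lam, 0.0)  # prox step
    return a

# Toy usage: a 200-band library of 50 signatures, 2 active members.
rng = np.random.default_rng(1)
D = np.abs(rng.normal(size=(200, 50)))          # toy spectral library
a_true = np.zeros(50)
a_true[[3, 17]] = [0.6, 0.4]
y = D @ a_true + 0.01 * rng.normal(size=200)
step = 0.9 / np.linalg.norm(D, 2) ** 2          # below 1/L for convergence
a_hat = sparse_unmix_ista(y, D, lam=0.1, step=step)
print(np.flatnonzero(a_hat > 1e-3))             # indices of active members
```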
arXiv Detail & Related papers (2024-01-23T10:07:41Z)
- Pixel-to-Abundance Translation: Conditional Generative Adversarial Networks Based on Patch Transformer for Hyperspectral Unmixing [12.976092623812757]
Spectral unmixing is a significant challenge in hyperspectral image processing.
We propose a hyperspectral conditional generative adversarial network (HyperGAN) method as a generic unmixing framework.
Experiments on synthetic and real hyperspectral data show impressive results compared to state-of-the-art competitors.
arXiv Detail & Related papers (2023-12-20T15:47:21Z)
- High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise [96.80184504268593]
Gradient clipping is one of the key algorithmic ingredients used to derive good high-probability guarantees.
However, clipping can spoil the convergence of popular methods for composite and distributed optimization.
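As a one-line reminder of what gradient clipping does (a generic sketch; the threshold and step size are arbitrary illustrative choices, not the paper's):

```python
import numpy as np

def clip_by_norm(g, lam):
    """Rescale gradient g so its Euclidean norm is at most lam."""
    norm = np.linalg.norm(g)
    return g if norm <= lam else g * (lam / norm)

# One clipped SGD step: rare heavy-tailed spikes cannot blow up the update.
w, step, lam = np.ones(3), 0.1, 1.0
g = np.array([10.0, -4.0, 2.0])       # e.g., a heavy-tailed gradient sample
w = w - step * clip_by_norm(g, lam)
```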
arXiv Detail & Related papers (2023-10-03T07:49:17Z)
- First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities [91.46841922915418]
We present a unified approach for the theoretical analysis of first-order methods for stochastic optimization and variational inequalities.
Our approach covers both non-convex and strongly convex minimization problems.
We provide bounds that match the oracle ones in the case of strongly convex optimization problems.
arXiv Detail & Related papers (2023-05-25T11:11:31Z)
- Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for methods solving monotone and structured non-monotone VIPs.
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
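Below is a minimal sketch of clipped stochastic gradient descent-ascent on a toy strongly-convex-strongly-concave saddle, with Student-t noise standing in for a heavy-tailed oracle; it illustrates the idea only and is not the paper's SEG/SGDA variants or their step-size theory.

```python
import numpy as np

def clip(v, lam):
    """Rescale v so its norm is at most lam."""
    n = np.linalg.norm(v)
    return v if n <= lam else v * (lam / n)

# Toy saddle: f(x, y) = 0.5||x||^2 + x @ A @ y - 0.5||y||^2, cast as a
# variational inequality with operator F(x, y) = (grad_x f, -grad_y f).
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
x, y, step, lam = np.ones(3), np.ones(3), 0.05, 5.0
for _ in range(2000):
    gx = x + A @ y + rng.standard_t(df=2, size=3)    # heavy-tailed noise
    gy = A.T @ x - y + rng.standard_t(df=2, size=3)
    g = clip(np.concatenate([gx, -gy]), lam)         # clip the VI operator
    x, y = x - step * g[:3], y - step * g[3:]
print(np.linalg.norm(x), np.linalg.norm(y))  # hovers in a noise ball near 0
```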
arXiv Detail & Related papers (2022-06-02T15:21:55Z)
- Tight integration of neural- and clustering-based diarization through deep unfolding of infinite Gaussian mixture model [84.57667267657382]
This paper introduces a trainable clustering algorithm into the integration framework.
Speaker embeddings are optimized during training such that they better fit iGMM clustering.
Experimental results show that the proposed approach outperforms the conventional approach in terms of diarization error rate.
arXiv Detail & Related papers (2022-02-14T07:45:21Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Learning Mixtures of Permutations: Groups of Pairwise Comparisons and Combinatorial Method of Moments [8.691957530860675]
We study the widely used Mallows mixture model.
In the high-dimensional setting, we propose an optimal-time algorithm that learns a Mallows mixture of permutations on $n$ elements.
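The learning algorithm itself is not summarized here; as background, the sketch below shows the standard repeated-insertion procedure for *sampling* from a single Mallows component with dispersion $\phi \in (0, 1]$ (a textbook construction, not the paper's method).

```python
import numpy as np

def sample_mallows(reference, phi, rng):
    """Draw one permutation from a Mallows model centered at `reference`
    via repeated insertion: inserting the i-th item at position j
    (1-indexed) has probability proportional to phi**(i - j)."""
    ranking = []
    for i, item in enumerate(reference, start=1):
        weights = phi ** (i - np.arange(1, i + 1))
        j = rng.choice(i, p=weights / weights.sum())
        ranking.insert(j, item)
    return ranking

rng = np.random.default_rng(0)
print(sample_mallows(list("abcde"), phi=0.3, rng=rng))
```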
arXiv Detail & Related papers (2020-09-14T23:11:46Z)
- Mixture Representation Learning with Coupled Autoencoders [1.589915930948668]
We propose cpl-mixVAE, an unsupervised variational framework based on multiple interacting networks.
In this framework, the mixture representation of each network is regularized by imposing a consensus constraint on the discrete factor.
We use the proposed method to jointly uncover discrete and continuous factors of variability describing gene expression in a single-cell transcriptomic dataset.
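A minimal sketch of what a consensus constraint on the discrete factor could look like, assuming (this is a guess at the form, not the paper's exact loss) a symmetric KL penalty between the two networks' categorical posteriors:

```python
import numpy as np

def consensus_penalty(p, q, eps=1e-9):
    """Symmetric KL divergence between the categorical (discrete-factor)
    posteriors of two coupled networks; adding this term to each VAE's
    loss pushes the networks toward agreeing on the discrete factor.
    The exact form used by cpl-mixVAE is an assumption here."""
    p, q = p + eps, q + eps
    kl_pq = np.sum(p * np.log(p / q))
    kl_qp = np.sum(q * np.log(q / p))
    return 0.5 * (kl_pq + kl_qp)

p = np.array([0.7, 0.2, 0.1])   # network 1's discrete posterior
q = np.array([0.6, 0.3, 0.1])   # network 2's discrete posterior
print(consensus_penalty(p, q))
```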
arXiv Detail & Related papers (2020-07-20T04:12:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.