Fast Semi-supervised Unmixing using Non-convex Optimization
- URL: http://arxiv.org/abs/2401.12609v1
- Date: Tue, 23 Jan 2024 10:07:41 GMT
- Title: Fast Semi-supervised Unmixing using Non-convex Optimization
- Authors: Behnood Rasti, Alexandre Zouaoui, Julien Mairal, Jocelyn Chanussot
- Abstract summary: We introduce a novel linear model for semisupervised/library-based unmixing.
We demonstrate the efficacy of the Alternating Direction Method of Multipliers (ADMM) in solving the resulting nonconvex problems.
- Score: 85.95119207126292
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a novel linear model tailored for
semisupervised/library-based unmixing. Our model incorporates considerations
for library mismatch while enabling the enforcement of the abundance sum-to-one
constraint (ASC). Unlike conventional sparse unmixing methods, this model
involves nonconvex optimization, presenting significant computational
challenges. We demonstrate the efficacy of the Alternating Direction Method of
Multipliers (ADMM) in cyclically solving these intricate problems. We propose two
semisupervised unmixing approaches, each relying on distinct priors applied to
the new model in addition to the ASC: sparsity prior and convexity constraint.
Our experimental results validate that enforcing the convexity constraint
outperforms the sparsity prior for the endmember library. These results are
corroborated across three simulated datasets (accounting for spectral
variability and varying pixel purity levels) and the Cuprite dataset.
Additionally, our comparison with conventional sparse unmixing methods
showcases considerable advantages of our proposed model, which entails
nonconvex optimization. Notably, our implementations of the proposed
algorithms, fast semisupervised unmixing (FaSUn) and sparse unmixing using
soft-shrinkage (SUnS), prove considerably more efficient than traditional sparse
unmixing methods. SUnS and FaSUn were implemented using PyTorch and provided in
a dedicated Python package called Fast Semisupervised Unmixing (FUnmix), which
is open-source and available at https://github.com/BehnoodRasti/FUnmix
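To make the soft-shrinkage idea concrete, here is a minimal, hypothetical sketch of nonnegative sparse unmixing solved by proximal gradient with a soft-shrinkage step, in the spirit of SUnS. It omits the paper's library-mismatch term and the ASC, and it is not the authors' FaSUn/SUnS implementation (see the FUnmix repository for that); all function names and parameters are illustrative.

```python
import torch

def sparse_unmix(Y, D, lam=1e-4, lr=None, n_iter=500):
    """Nonnegative sparse unmixing: min_A 0.5*||Y - D A||_F^2 + lam*||A||_1, A >= 0.

    Y: (bands, pixels) observed spectra; D: (bands, atoms) spectral library.
    Solved by proximal gradient (ISTA); the prox of the L1 term under
    nonnegativity is a one-sided soft-shrinkage, i.e. relu(x - lr * lam).
    """
    if lr is None:
        # 1/L step size, with L the Lipschitz constant of the data-term gradient.
        lr = 1.0 / torch.linalg.matrix_norm(D, ord=2) ** 2
    A = torch.zeros(D.shape[1], Y.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ A - Y)                   # gradient of the data term
        A = torch.relu(A - lr * grad - lr * lam)   # soft-shrinkage + A >= 0
    return A

# Toy usage: 50 bands, a 20-atom library, 5 pixels mixing 3 endmembers.
torch.manual_seed(0)
D = torch.rand(50, 20)
A_true = torch.zeros(20, 5)
A_true[:3] = torch.rand(3, 5)
A_true /= A_true.sum(0)                            # abundances sum to one
A_hat = sparse_unmix(D @ A_true, D)
```

Enforcing the ASC would replace the shrinkage step with a projection onto the probability simplex; the paper's full model additionally accounts for library mismatch and is solved with ADMM.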
Related papers
- Gaussian Mixture Solvers for Diffusion Models [84.83349474361204]
We introduce a novel class of SDE-based solvers called GMS for diffusion models.
Our solver outperforms numerous SDE-based solvers in terms of sample quality in image generation and stroke-based synthesis.
arXiv Detail & Related papers (2023-11-02T02:05:38Z)
- AMPLIFY: Attention-based Mixup for Performance Improvement and Label Smoothing in Transformer [2.3072402651280517]
AMPLIFY uses the Attention mechanism of the Transformer itself to reduce the influence of noise and aberrant values in the original samples on the prediction results.
The experimental results show that, under a smaller computational resource cost, AMPLIFY outperforms other Mixup methods in text classification tasks.
arXiv Detail & Related papers (2023-09-22T08:02:45Z)
- SUnAA: Sparse Unmixing using Archetypal Analysis [62.997667081978825]
This paper introduces a new sparse unmixing technique using archetypal analysis (SUnAA).
First, we design a new model based on archetypal analysis, where the endmembers are convex combinations of the library spectra (a minimal sketch follows this entry).
arXiv Detail & Related papers (2023-08-09T07:58:33Z)
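As a companion to the SUnAA entry above, here is a minimal, hypothetical projected-gradient sketch of the archetypal model Y ≈ D B A, where the endmembers D @ B are convex combinations of library spectra and each pixel's abundances are nonnegative and sum to one. This is not the authors' SUnAA solver; the function names and the number of endmembers p are illustrative.

```python
import numpy as np

def project_simplex_cols(X):
    """Project each column of X onto the probability simplex (Duchi et al., 2008)."""
    U = np.sort(X, axis=0)[::-1]                   # each column sorted descending
    css = np.cumsum(U, axis=0) - 1.0
    idx = np.arange(1, X.shape[0] + 1)[:, None]
    cond = U - css / idx > 0
    rho = cond.cumsum(axis=0).argmax(axis=0)       # last index where cond holds
    theta = css[rho, np.arange(X.shape[1])] / (rho + 1)
    return np.maximum(X - theta, 0.0)

def archetypal_unmix(Y, D, p=3, n_iter=2000, seed=0):
    """Alternating projected gradient for Y ~ D @ B @ A with column-stochastic B, A."""
    rng = np.random.default_rng(seed)
    B = project_simplex_cols(rng.random((D.shape[1], p)))
    A = project_simplex_cols(rng.random((p, Y.shape[1])))
    for _ in range(n_iter):
        E = D @ B                                  # current endmembers
        lr_a = 1.0 / (np.linalg.norm(E, 2) ** 2 + 1e-12)
        A = project_simplex_cols(A - lr_a * (E.T @ (E @ A - Y)))
        G = D.T @ (D @ B @ A - Y) @ A.T            # gradient w.r.t. B
        lr_b = 1.0 / ((np.linalg.norm(D, 2) * np.linalg.norm(A, 2)) ** 2 + 1e-12)
        B = project_simplex_cols(B - lr_b * G)
    return B, A
```

The simplex projections enforce nonnegativity and sum-to-one together, so the ASC holds for the abundances by construction.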
- Bayesian Pseudo-Coresets via Contrastive Divergence [5.479797073162603]
We introduce a novel approach for constructing pseudo-coresets by utilizing contrastive divergence.
It eliminates the need for approximations in the pseudo-coreset construction process.
We conduct extensive experiments on multiple datasets, demonstrating its superiority over existing BPC techniques.
arXiv Detail & Related papers (2023-03-20T17:13:50Z)
- A distribution-free mixed-integer optimization approach to hierarchical modelling of clustered and longitudinal data [0.0]
We introduce an innovative algorithm that evaluates cluster effects for new data points, thereby increasing the robustness and precision of this model.
The inferential and predictive efficacy of this approach is further illustrated through its application in student scoring and protein expression.
arXiv Detail & Related papers (2023-02-06T23:34:51Z)
- A Robust and Flexible EM Algorithm for Mixtures of Elliptical Distributions with Missing Data [71.9573352891936]
This paper tackles the problem of missing data imputation for noisy and non-Gaussian data.
A new EM algorithm for mixtures of elliptical distributions is investigated, designed to handle potential missing data.
Experimental results on synthetic data demonstrate that the proposed algorithm is robust to outliers and can be used with non-Gaussian data.
arXiv Detail & Related papers (2022-01-28T10:01:37Z)
- Stochastic Projective Splitting: Solving Saddle-Point Problems with Multiple Regularizers [4.568911586155097]
We present a new stochastic variant of the projective splitting (PS) family of algorithms for monotone inclusion problems.
It can solve min-max and noncooperative game formulations arising in applications such as robust ML without the convergence issues associated with gradient descent-ascent.
arXiv Detail & Related papers (2021-06-24T14:48:43Z)
- On Stochastic Moving-Average Estimators for Non-Convex Optimization [105.22760323075008]
In this paper, we demonstrate the power of a widely used stochastic estimator based on moving average (SEMA) on non-convex optimization problems.
We also present state-of-the-art results for all these problems (a minimal sketch of the moving-average estimator follows this entry).
arXiv Detail & Related papers (2021-04-30T08:50:24Z)
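To illustrate the moving-average idea in the entry above, here is a minimal, hypothetical sketch of SGD driven by an exponential-moving-average gradient estimator (SEMA-style). It is not the paper's algorithm; the objective, names, and constants are illustrative.

```python
import numpy as np

def sema_sgd(grad_fn, x0, lr=0.05, beta=0.9, n_iter=500, seed=0):
    """SGD with a moving-average (SEMA-style) gradient estimator.

    z_t = beta * z_{t-1} + (1 - beta) * g_t, where g_t is a noisy gradient;
    the iterate follows the smoothed estimate z_t instead of g_t directly.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    z = np.zeros_like(x)
    for _ in range(n_iter):
        g = grad_fn(x, rng)               # stochastic gradient oracle
        z = beta * z + (1 - beta) * g     # exponential moving average
        x -= lr * z
    return x

# Toy non-convex objective f(x) = sum(x**2 + 0.5*sin(3*x)) with gradient noise.
noisy_grad = lambda x, rng: 2 * x + 1.5 * np.cos(3 * x) + rng.normal(0.0, 1.0, x.shape)
x_final = sema_sgd(noisy_grad, [2.0, -1.5])
```

Averaging damps the gradient noise at the cost of a small bias, which is the trade-off such estimators exploit on non-convex problems.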
- An Adaptive EM Accelerator for Unsupervised Learning of Gaussian Mixture Models [0.7340845393655052]
We propose an Anderson Acceleration scheme for the adaptive Expectation-Maximization (EM) algorithm for unsupervised learning.
The proposed algorithm is able to determine the optimal number of mixture components autonomously and converges to the optimal solution much faster than its non-accelerated version (a minimal sketch of Anderson acceleration follows this entry).
arXiv Detail & Related papers (2020-09-26T22:55:44Z)
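The following is a minimal, hypothetical sketch of Anderson acceleration for a generic fixed-point map x = g(x); in the entry above, g would be one adaptive EM update of the Gaussian mixture parameters, which is not reproduced here. Names and the memory depth m are illustrative.

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, n_iter=50):
    """Anderson acceleration for x = g(x), keeping the last m residual differences."""
    x = np.asarray(x0, dtype=float)
    G_hist, F_hist = [], []                        # g(x) evaluations and residuals
    for _ in range(n_iter):
        gx = g(x)
        f = gx - x                                 # fixed-point residual
        G_hist.append(gx)
        F_hist.append(f)
        if len(F_hist) > m + 1:
            G_hist.pop(0)
            F_hist.pop(0)
        if len(F_hist) == 1:
            x = gx                                 # plain fixed-point step
            continue
        dF = np.stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)], axis=1)
        dG = np.stack([G_hist[i + 1] - G_hist[i] for i in range(len(G_hist) - 1)], axis=1)
        gamma = np.linalg.lstsq(dF, f, rcond=None)[0]  # least-squares mixing weights
        x = gx - dG @ gamma                        # extrapolated iterate
    return x

# Toy usage: the contraction x = cos(x) converges in a handful of iterations.
x_fix = anderson_fixed_point(np.cos, np.array([1.0]))
```

The extrapolation typically cuts the number of g evaluations substantially compared with the plain iteration, which is the kind of speedup the entry reports for EM.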
- Clustering Binary Data by Application of Combinatorial Optimization Heuristics [52.77024349608834]
We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters.
Five new and original methods are introduced, using neighborhoods and population behavior optimization metaheuristics.
From a set of 16 data tables generated by a quasi-Monte Carlo experiment, a comparison is performed for one of the aggregations using L1 dissimilarity, with hierarchical clustering, and a version of k-means: partitioning around medoids or PAM.
arXiv Detail & Related papers (2020-01-06T23:33:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.