Second-order Approximation of Minimum Discrimination Information in
Independent Component Analysis
- URL: http://arxiv.org/abs/2111.15060v1
- Date: Tue, 30 Nov 2021 01:51:08 GMT
- Title: Second-order Approximation of Minimum Discrimination Information in
Independent Component Analysis
- Authors: YunPeng Li
- Abstract summary: Independent Component Analysis (ICA) is intended to recover mutually independent sources from their linear mixtures.
FastICA is one of the most successful ICA algorithms.
We propose a novel method based on the second-order approximation of minimum discrimination information.
- Score: 5.770800671793959
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Independent Component Analysis (ICA) is intended to recover the
mutually independent sources from their linear mixtures, and FastICA is one of
the most successful ICA algorithms. Although it seems reasonable to improve
the performance of FastICA by introducing more nonlinear functions into the
negentropy estimation, the original fixed-point method (approximate Newton
method) in FastICA degenerates under this circumstance. To alleviate this
problem, we propose a novel method based on the second-order approximation of
minimum discrimination information (MDI). The joint maximization in our method
consists of minimizing a single weighted least-squares problem and seeking the
unmixing matrix by the fixed-point method. Experimental results validate its
efficiency compared with other popular ICA algorithms.
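For intuition, here is a minimal single-unit FastICA iteration with the classic tanh nonlinearity. This is the standard textbook fixed-point (approximate Newton) update on whitened data, not the paper's MDI-based method; the function name and defaults are illustrative.

```python
import numpy as np

def fastica_one_unit(X, max_iter=200, tol=1e-6, seed=0):
    """Single-unit FastICA with the tanh nonlinearity.

    X : (d, n) array of whitened observations (zero mean, identity covariance).
    Returns a unit vector w such that w @ X estimates one independent source.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        y = w @ X                                  # current source estimate, shape (n,)
        g, g_prime = np.tanh(y), 1.0 - np.tanh(y) ** 2
        # Fixed-point (approximate Newton) update: E[X g(w^T X)] - E[g'(w^T X)] w
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:        # converged up to sign
            return w_new
        w = w_new
    return w
```

The degeneration the paper addresses arises when richer families of nonlinear functions replace the single g above; this simple update does not handle that case.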
Related papers
- L1-Regularized ICA: A Novel Method for Analysis of Task-related fMRI Data [0.0]
We propose a new method of independent component analysis (ICA) to extract appropriate features from high-dimensional data.
To validate the proposed method, we apply it to synthetic data and real functional magnetic resonance imaging (fMRI) data.
arXiv Detail & Related papers (2024-10-17T02:54:01Z)
- Efficient Estimation of Unique Components in Independent Component Analysis by Matrix Representation [1.0282274843007793]
Independent component analysis (ICA) is a widely used method in various applications of signal processing and feature extraction.
In this paper, the estimation of unique components in ICA is substantially accelerated by reformulating the algorithm in matrix representation.
Experimental results on artificial datasets and EEG data verified the efficiency of the proposed method.
arXiv Detail & Related papers (2024-08-30T09:01:04Z)
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- An optimal pairwise merge algorithm improves the quality and consistency of nonnegative matrix factorization [0.0]
Non-negative matrix factorization (NMF) is a key technique for feature extraction and is widely used in source separation.
Here we show that some of NMF's weaknesses, such as convergence to poor local optima, may be mitigated by performing NMF in a higher-dimensional feature space.
Experimental results demonstrate that our method helps non-ideal NMF solutions escape to better local optima.
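For background, here is a minimal sketch of standard Lee-Seung multiplicative updates for NMF under the Frobenius loss. The paper's pairwise-merge algorithm and higher-dimensional feature space are not reproduced here; all names and defaults are illustrative.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=500, eps=1e-10, seed=0):
    """Standard Lee-Seung multiplicative updates for V ~ W @ H (Frobenius loss)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```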
arXiv Detail & Related papers (2024-08-16T20:43:42Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
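The fixed-point view can be illustrated with a generic damped iteration z ← (1 − α) z + α f(z); production Deep Equilibrium solvers typically use Anderson acceleration or Broyden's method instead. A minimal sketch, with all names illustrative:

```python
import numpy as np

def solve_fixed_point(f, z0, max_iter=100, tol=1e-8, damping=0.5):
    """Find z* with z* = f(z*) by damped fixed-point iteration.

    A stand-in for the implicit solve at the heart of a Deep Equilibrium
    layer; real DEQs use Anderson acceleration or Broyden's method.
    """
    z = z0
    for _ in range(max_iter):
        z_next = (1 - damping) * z + damping * f(z)
        if np.linalg.norm(z_next - z) <= tol * (1 + np.linalg.norm(z)):
            return z_next
        z = z_next
    return z

# Example: a contractive affine map, so the iteration provably converges.
A = np.array([[0.3, 0.1], [0.0, 0.2]])
b = np.array([1.0, -1.0])
z_star = solve_fixed_point(lambda z: A @ z + b, np.zeros(2))
```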
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth parts.
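One classical way to linearize only the smooth part over a compact convex set is the Frank-Wolfe (conditional gradient) method. The sketch below runs plain Frank-Wolfe on the probability simplex; it is a generic illustration of the linearization idea, not the paper's fully composite algorithm, which additionally keeps a non-differentiable term exact.

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, n_iter=200):
    """Frank-Wolfe over the probability simplex: only the smooth part is
    linearized; the constraint set is handled by a linear oracle."""
    x = x0
    for k in range(n_iter):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # linear minimization oracle on the simplex
        gamma = 2.0 / (k + 2.0)        # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Example: minimize ||x - c||^2 over the simplex.
c = np.array([0.2, 0.5, 0.9])
x_hat = frank_wolfe_simplex(lambda x: 2 * (x - c), np.ones(3) / 3)
```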
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier.
Most existing methods can only optimize PAUC approximately, leading to inevitable biases that are not controllable.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization.
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- A Novel Maximum-Entropy-Driven Technique for Low-Rank Orthogonal Nonnegative Matrix Factorization with $\ell_0$-Norm Sparsity Constraint [0.0]
In data-driven control and machine learning, a common requirement involves breaking down large matrices into smaller, low-rank factors.
This paper introduces an innovative solution to the orthogonal nonnegative matrix factorization (ONMF) problem.
The proposed method achieves comparable or improved reconstruction errors in line with the literature.
arXiv Detail & Related papers (2022-10-06T04:30:59Z)
- Efficient Approximations of the Fisher Matrix in Neural Networks using Kronecker Product Singular Value Decomposition [0.0]
It is shown that natural gradient descent can minimize the objective function more efficiently than ordinary gradient-descent-based methods.
The bottleneck of this approach for training deep neural networks lies in the prohibitive cost of solving a large dense linear system corresponding to the Fisher Information Matrix (FIM) at each iteration.
This has motivated various approximations of either the exact FIM or the empirical one.
The most sophisticated of these is KFAC, which involves a Kronecker-factored block diagonal approximation of the FIM.
With only a slight additional cost, a few improvements of KFAC from the standpoint of accuracy are proposed.
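The computational payoff of a Kronecker-factored block is that its inverse can be applied with two small solves, via the identity (A ⊗ G)⁻¹ vec(V) = vec(G⁻¹ V A⁻¹) for symmetric factors and column-major vec. A minimal numerical check of the identity (illustrative, not KFAC itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive-definite matrix (like a covariance factor)."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A, G = random_spd(4), random_spd(3)      # Kronecker factors, F block ~ A kron G
V = rng.standard_normal((3, 4))          # gradient block: rows match G, cols match A

vec = lambda M: M.flatten(order="F")     # column-major vectorization

# Naive: invert the full Kronecker product (cost grows as (3*4)^3).
naive = np.linalg.solve(np.kron(A, G), vec(V))

# KFAC-style: two small solves instead of one large one.
fast = vec(np.linalg.solve(G, V) @ np.linalg.inv(A))

assert np.allclose(naive, fast)
```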
arXiv Detail & Related papers (2022-01-25T12:56:17Z)
- Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution Differential Equations [83.3201889218775]
Several widely-used first-order saddle-point optimization methods yield an identical continuous-time ordinary differential equation (ODE) when derived naively.
However, the convergence properties of these methods are qualitatively different, even on simple bilinear games.
We adopt a framework studied in fluid dynamics to design differential equation models for several saddle-point optimization methods.
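A standard way to see the qualitative gap is the bilinear game f(x, y) = xy, where simultaneous gradient descent-ascent spirals away from the saddle while extragradient converges, even though both discretize the same naive ODE. A minimal illustration (not the paper's high-resolution models):

```python
import numpy as np

# Bilinear game f(x, y) = x * y: gradient descent-ascent (GDA) diverges,
# while extragradient (EG) converges, despite sharing the same naive ODE.
eta, steps = 0.1, 200
x_gda = y_gda = x_eg = y_eg = 1.0
for _ in range(steps):
    # GDA: simultaneous descent in x, ascent in y.
    x_gda, y_gda = x_gda - eta * y_gda, y_gda + eta * x_gda
    # EG: a lookahead half-step, then the real step using lookahead gradients.
    x_h, y_h = x_eg - eta * y_eg, y_eg + eta * x_eg
    x_eg, y_eg = x_eg - eta * y_h, y_eg + eta * x_h

print("GDA distance from saddle:", np.hypot(x_gda, y_gda))  # grows
print("EG  distance from saddle:", np.hypot(x_eg, y_eg))    # shrinks
```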
arXiv Detail & Related papers (2021-12-27T18:31:34Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several of the fastest known randomized solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
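Below is a minimal sketch-and-solve illustration with a Gaussian embedding, the simpler of the two embeddings mentioned (subsampled Hadamard transforms are cheaper to apply). The paper's optimal methods use sketches more carefully, e.g. within iterative schemes; sizes and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10_000, 50, 400                # tall problem; sketch size m >> d

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# Gaussian embedding: S @ A approximately preserves the geometry of A's
# column span with high probability, so the sketched problem is much smaller.
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(f"relative error of sketched solution: {rel_err:.3f}")
```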
arXiv Detail & Related papers (2020-02-21T17:45:32Z)