Efficient Estimation of Unique Components in Independent Component Analysis by Matrix Representation
- URL: http://arxiv.org/abs/2408.17118v1
- Date: Fri, 30 Aug 2024 09:01:04 GMT
- Title: Efficient Estimation of Unique Components in Independent Component Analysis by Matrix Representation
- Authors: Yoshitatsu Matsuda, Kazunori Yamaguchi
- Abstract summary: Independent component analysis (ICA) is a widely used method in various applications of signal processing and feature extraction.
In this paper, the unique estimation of ICA is highly accelerated by reformulating the algorithm in matrix representation.
Experimental results on artificial datasets and EEG data verified the efficiency of the proposed method.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Independent component analysis (ICA) is a widely used method in various applications of signal processing and feature extraction. It extends principal component analysis (PCA) and can extract important and complicated components with small variances. One of the major problems of ICA is that the uniqueness of the solution is not guaranteed, unlike PCA. That is because there are many local optima in optimizing the objective function of ICA. It has been shown previously that the unique global optimum of ICA can be estimated from many random initializations by handcrafted thread computation. In this paper, the unique estimation of ICA is highly accelerated by reformulating the algorithm in matrix representation and reducing redundant calculations. Experimental results on artificial datasets and EEG data verified the efficiency of the proposed method.
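The abstract's central idea is estimating a unique ICA solution by running the optimization from many random initializations and keeping the best one. The following is a minimal NumPy sketch of that multi-restart strategy using a plain symmetric FastICA iteration, not the paper's accelerated matrix reformulation; the mixing matrix, the sources, and the choice of 20 restarts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two non-Gaussian sources: a sine wave and a uniform signal.
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(3 * t), rng.uniform(-1, 1, t.size)]
A = np.array([[1.0, 0.5], [0.4, 1.0]])     # hypothetical mixing matrix
X = S @ A.T                                # observed mixtures

# Whitening (the PCA step): zero mean, identity covariance.
Xc = X - X.mean(axis=0)
U, d, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = (Xc @ Vt.T) / d * np.sqrt(X.shape[0])

def fastica_run(Z, W0, n_iter=200):
    """One symmetric FastICA run (tanh nonlinearity) from a given start."""
    W = W0
    for _ in range(n_iter):
        G = np.tanh(Z @ W.T)
        Gp = 1.0 - G ** 2
        W_new = G.T @ Z / Z.shape[0] - np.diag(Gp.mean(axis=0)) @ W
        # Symmetric decorrelation keeps the rows orthonormal.
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt
    return W

def contrast(Z, W):
    """Log-cosh negentropy proxy: larger means more non-Gaussian output."""
    return np.sum((np.mean(np.log(np.cosh(Z @ W.T)), axis=0) - 0.3746) ** 2)

# Many random initializations; keep the run with the best contrast value.
best_W, best_val = None, -np.inf
for _ in range(20):
    W0, _ = np.linalg.qr(rng.standard_normal((2, 2)))
    W = fastica_run(Z, W0)
    val = contrast(Z, W)
    if val > best_val:
        best_W, best_val = W, val

S_hat = Z @ best_W.T   # recovered components, up to sign and permutation
```

The recovered components match the true sources only up to sign and permutation, which is the inherent ambiguity of ICA; the multi-restart selection addresses the separate problem of local optima in the contrast function.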
Related papers
- L1-Regularized ICA: A Novel Method for Analysis of Task-related fMRI Data [0.0]
We propose a new method of independent component analysis (ICA) in order to extract appropriate features from high-dimensional data.
For the validity of our proposed method, we apply it to synthetic data and real functional magnetic resonance imaging data.
arXiv Detail & Related papers (2024-10-17T02:54:01Z) - An efficient quantum algorithm for independent component analysis [3.400945485383699]
Independent component analysis (ICA) is a fundamental data processing technique to decompose the captured signals into as independent as possible components.
This paper presents a quantum ICA algorithm which focuses on computing a specified contrast function on a quantum computer.
arXiv Detail & Related papers (2023-11-21T11:21:23Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - Second-order Approximation of Minimum Discrimination Information in Independent Component Analysis [5.770800671793959]
Independent Component Analysis (ICA) is intended to recover mutually independent sources from their linear mixtures.
FastICA is one of the most successful ICA algorithms.
We propose a novel method based on the second-order approximation of minimum discrimination information.
arXiv Detail & Related papers (2021-11-30T01:51:08Z) - Shared Independent Component Analysis for Multi-Subject Neuroimaging [107.29179765643042]
We introduce Shared Independent Component Analysis (ShICA) that models each view as a linear transform of shared independent components contaminated by additive Gaussian noise.
We show that this model is identifiable if the components are either non-Gaussian or have enough diversity in noise variances.
We provide empirical evidence on fMRI and MEG datasets that ShICA yields more accurate estimation of the components than alternatives.
arXiv Detail & Related papers (2021-10-26T08:54:41Z) - Self-paced Principal Component Analysis [17.333976289539457]
We propose a novel method called Self-paced PCA (SPCA) to further reduce the effect of noise and outliers.
The complexity of each sample is calculated at the beginning of each iteration in order to integrate samples from simple to more complex into training.
arXiv Detail & Related papers (2021-06-25T20:50:45Z) - Feature Weighted Non-negative Matrix Factorization [92.45013716097753]
We propose the Feature weighted Non-negative Matrix Factorization (FNMF) in this paper.
FNMF learns the weights of features adaptively according to their importance.
It can be solved efficiently with the suggested optimization algorithm.
arXiv Detail & Related papers (2021-03-24T21:17:17Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z) - Stochastic Approximation for Online Tensorial Independent Component Analysis [98.34292831923335]
Independent component analysis (ICA) has been a popular dimension reduction tool in statistical machine learning and signal processing.
In this paper, we present a by-product online tensorial algorithm that estimates each independent component.
arXiv Detail & Related papers (2020-12-28T18:52:37Z) - A Framework for Private Matrix Analysis [20.407204637672887]
We give the first efficient $o(W)$ space differentially private algorithms for spectral approximation, principal component analysis, and linear regression.
We also initiate and show efficient differentially private algorithms for two important variants of principal component analysis.
arXiv Detail & Related papers (2020-09-06T08:01:59Z) - Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed; these are termed Sparse Principal Component Analysis (SPCA).
We present thresholding as a provably accurate, polynomial time, approximation algorithm for the SPCA problem.
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
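The thresholding idea in the last entry above can be sketched in a few lines: compute the dense leading principal direction, then keep only its largest loadings. This is a minimal illustration, not the paper's analyzed algorithm; the synthetic data, the sparse ground-truth direction, and the choice of `k=3` are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data whose leading direction is sparse (first 3 of 10 features).
n, p = 500, 10
v = np.zeros(p)
v[:3] = 1 / np.sqrt(3)                       # hypothetical sparse ground truth
X = rng.standard_normal((n, 1)) * 3 * v + 0.5 * rng.standard_normal((n, p))

def thresholded_pc(X, k):
    """Leading principal direction, keeping only the k largest loadings."""
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    u = eigvecs[:, -1]                       # dense leading eigenvector
    keep = np.argsort(np.abs(u))[-k:]        # indices of the k largest loadings
    s = np.zeros_like(u)
    s[keep] = u[keep]
    return s / np.linalg.norm(s)             # renormalized sparse direction

s = thresholded_pc(X, k=3)
```

The resulting direction is exactly k-sparse by construction; how much explained variance is lost relative to the dense eigenvector is the quantity such approximation guarantees bound.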
This list is automatically generated from the titles and abstracts of the papers in this site.