L1-Regularized ICA: A Novel Method for Analysis of Task-related fMRI Data
- URL: http://arxiv.org/abs/2410.13171v1
- Date: Thu, 17 Oct 2024 02:54:01 GMT
- Title: L1-Regularized ICA: A Novel Method for Analysis of Task-related fMRI Data
- Authors: Yusuke Endo, Koujin Takeda
- Abstract summary: We propose a new method of independent component analysis (ICA) to extract appropriate features from high-dimensional data.
To validate the proposed method, we apply it to synthetic data and real functional magnetic resonance imaging data.
- Score: 0.0
- Abstract: We propose a new method of independent component analysis (ICA) to extract appropriate features from high-dimensional data. In general, matrix factorization methods, including ICA, suffer from poor interpretability of the extracted features. To improve interpretability, imposing a sparsity constraint on the factorized matrix is considered helpful. With this background, we construct a new ICA method with sparsity: an L1-regularization term is added to the cost function of ICA, and the cost function is minimized by the difference of convex functions (DC) algorithm. To validate the proposed method, we apply it to synthetic data and real functional magnetic resonance imaging data.
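To make the formulation concrete, the sketch below shows one way such an L1-regularized ICA can be set up: a standard log-cosh maximum-likelihood ICA contrast with an entrywise L1 penalty on the unmixing matrix, minimized here by proximal gradient (soft thresholding) rather than the paper's difference of convex functions algorithm. The contrast function, optimizer, hyperparameters, and the name l1_ica_sketch are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l1_ica_sketch(X, lam=0.05, lr=0.05, n_iter=500, seed=0):
    """Illustrative L1-regularized ICA (not the paper's DC algorithm).

    X : whitened data of shape (n_components, n_samples).
    Minimizes  -log|det W| + mean_t sum_i log cosh((W x_t)_i) + lam * ||W||_1
    by proximal gradient: a gradient step on the smooth ICA term,
    then entrywise soft thresholding for the L1 term.
    """
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = rng.standard_normal((n, n)) / np.sqrt(n)
    for _ in range(n_iter):
        S = W @ X
        # Gradient of the smooth part: -W^{-T} + (1/T) * tanh(S) @ X^T
        grad = -np.linalg.inv(W).T + np.tanh(S) @ X.T / T
        W = W - lr * grad
        # Proximal step for lam * ||W||_1 (soft thresholding)
        W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)
    return W

# Toy usage: mix two super-Gaussian sources, whiten, then unmix.
rng = np.random.default_rng(1)
S_true = rng.laplace(size=(2, 2000))
A = rng.standard_normal((2, 2))
X = A @ S_true
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))          # whitening via eigendecomposition
X_white = E @ np.diag(d ** -0.5) @ E.T @ X
W = l1_ica_sketch(X_white)
print("Estimated unmixing matrix:\n", W)
```

Soft thresholding is used here only because it handles the L1 term in a few lines; the paper instead splits the regularized cost into a difference of convex functions and minimizes it with the DC algorithm.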
Related papers
- Efficient Estimation of Unique Components in Independent Component Analysis by Matrix Representation [1.0282274843007793]
Independent component analysis (ICA) is a widely used method in various applications of signal processing and feature extraction.
In this paper, the estimation of unique components in ICA is greatly accelerated by reformulating the algorithm in matrix representation.
Experimental results on artificial datasets and EEG data verified the efficiency of the proposed method.
arXiv Detail & Related papers (2024-08-30T09:01:04Z) - Generating gradients in the energy landscape using rectified linear type cost functions for efficiently solving 0/1 matrix factorization in Simulated Annealing [7.339479909020814]
We propose a method to facilitate the solution process by applying a gradient to the energy landscape.
We also propose a method to quickly obtain a solution by updating the cost function's gradient during the search process.
arXiv Detail & Related papers (2023-12-27T04:19:47Z) - Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z) - Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z) - Nonlinear Feature Aggregation: Two Algorithms driven by Theory [45.3190496371625]
Real-world machine learning applications are characterized by a huge number of features, leading to computational and memory issues.
We propose a dimensionality reduction algorithm (NonLinCFA) which aggregates non-linear transformations of features with a generic aggregation function.
We also test the algorithms on synthetic and real-world datasets, performing regression and classification tasks, showing competitive performances.
arXiv Detail & Related papers (2023-06-19T19:57:33Z) - Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z) - Second-order Approximation of Minimum Discrimination Information in Independent Component Analysis [5.770800671793959]
Independent Component Analysis (ICA) is intended to recover mutually independent sources from their linear mixtures.
FastICA is one of the most successful ICA algorithms.
We propose a novel method based on the second-order approximation of minimum discrimination information.
arXiv Detail & Related papers (2021-11-30T01:51:08Z) - Efficient Multidimensional Functional Data Analysis Using Marginal Product Basis Systems [2.4554686192257424]
We propose a framework for learning continuous representations from a sample of multidimensional functional data.
We show that the resulting estimation problem can be solved efficiently via tensor decomposition.
We conclude with a real data application in neuroimaging.
arXiv Detail & Related papers (2021-07-30T16:02:15Z) - Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
By "weakly supervised" we mean that for some training points the labels are known, for some they are unknown, and for others they are uncertain due to random noise or other causes such as a lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Logistic Q-Learning [87.00813469969167]
We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs.
The main feature of our algorithm is a convex loss function for policy evaluation that serves as a theoretically sound alternative to the widely used squared Bellman error.
arXiv Detail & Related papers (2020-10-21T17:14:31Z) - Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.
The site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.