Compressive Independent Component Analysis: Theory and Algorithms
- URL: http://arxiv.org/abs/2110.08045v1
- Date: Fri, 15 Oct 2021 12:19:07 GMT
- Title: Compressive Independent Component Analysis: Theory and Algorithms
- Authors: Michael P. Sheehan and Mike E. Davies
- Abstract summary: We look at the independent component analysis (ICA) model through the compressive learning lens.
We show that solutions to the cumulant based ICA model have particular structure that induces a low dimensional model set.
We propose two algorithms of the form of an iterative projection gradient (IPG) and an alternating steepest descent (ASD) algorithm for compressive ICA.
- Score: 16.594813920535486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compressive learning forms the exciting intersection between compressed
sensing and statistical learning where one exploits forms of sparsity and
structure to reduce the memory and/or computational complexity of the learning
task. In this paper, we look at the independent component analysis (ICA) model
through the compressive learning lens. In particular, we show that solutions to
the cumulant based ICA model have particular structure that induces a low
dimensional model set that resides in the cumulant tensor space. By showing a
restricted isometry property holds for random cumulants, e.g. Gaussian
ensembles, we prove the existence of a compressive ICA scheme. Thereafter, we
propose two algorithms of the form of an iterative projection gradient (IPG)
and an alternating steepest descent (ASD) algorithm for compressive ICA, where
the order of compression asserted from the restricted isometry property is
realised through empirical results. We provide analysis of the CICA algorithms
including the effects of finite samples. The effects of compression are
characterised by a trade-off between the sketch size and the statistical
efficiency of the ICA estimates. By considering synthetic and real datasets, we
show the substantial memory gains achieved over well-known ICA algorithms by
using one of the proposed CICA algorithms. Finally, we conclude the paper with
open problems including interesting challenges from the emerging field of
compressive learning.
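As a concrete illustration of the scheme the abstract describes, the following is a minimal NumPy sketch of cumulant-based compressive ICA: assuming whitened data (so the mixing matrix A is orthogonal) and fourth-order cumulants, the model tensor has the rank-one symmetric form sum_i kappa_i a_i (x) a_i (x) a_i (x) a_i, a random Gaussian operator Phi compresses it into a sketch y, and an iterative projection gradient loop fits A and kappa to the sketch. The toy dimensions, step size, and finite-difference gradient are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cumulant_model(A, kappa):
    """Flattened fourth-order cumulant model: sum_i kappa_i * a_i (x) a_i (x) a_i (x) a_i."""
    d = A.shape[0]
    C = np.zeros((d, d, d, d))
    for i in range(A.shape[1]):
        a = A[:, i]
        C += kappa[i] * np.einsum("p,q,r,s->pqrs", a, a, a, a)
    return C.ravel()

def project_orthogonal(A):
    """Projection onto the orthogonal matrices via the polar decomposition."""
    U, _, Vt = np.linalg.svd(A)
    return U @ Vt

# Toy problem: d whitened sources; Gaussian sketching operator Phi with m << d^4.
d, m = 4, 60
A_true = project_orthogonal(rng.standard_normal((d, d)))
kappa_true = rng.uniform(1.0, 2.0, size=d)
Phi = rng.standard_normal((m, d**4)) / np.sqrt(m)
y = Phi @ cumulant_model(A_true, kappa_true)      # the compressive sketch

# Iterative projection gradient: gradient step on the sketch residual,
# then projection back onto the (orthogonal-mixing) model set.
A = project_orthogonal(rng.standard_normal((d, d)))
kappa = np.ones(d)
step, eps = 0.05, 1e-6
for _ in range(200):
    r = Phi @ cumulant_model(A, kappa) - y
    f0 = r @ r
    gA = np.zeros_like(A)                         # finite-difference gradient in A
    for i in range(d):
        for j in range(d):
            E = np.zeros_like(A); E[i, j] = eps
            r_ij = Phi @ cumulant_model(A + E, kappa) - y
            gA[i, j] = (r_ij @ r_ij - f0) / eps
    # analytic gradient in kappa: d(model)/d(kappa_i) = vec(a_i (x) a_i (x) a_i (x) a_i)
    gk = np.array([2 * r @ (Phi @ cumulant_model(A[:, [i]], np.ones(1)))
                   for i in range(d)])
    A = project_orthogonal(A - step * gA)         # gradient step + projection
    kappa = kappa - step * gk
```

In the paper's setting the admissible sketch size m is governed by the restricted isometry property for random cumulants, which is what makes m much smaller than d^4 viable.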
Related papers
- Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise [19.496063739638924]
We consider a problem of Bayesian inference for a spiked matrix model with structured noise.
We show how to predict the statistical limits using an efficient algorithm inspired by the theory of adaptive Thouless-Anderson-Palmer equations.
arXiv Detail & Related papers (2024-05-31T16:38:35Z)
- Quantization of Large Language Models with an Overdetermined Basis [73.79368761182998]
We introduce an algorithm for data quantization based on the principles of Kashin representation.
Our findings demonstrate that Kashin Quantization achieves competitive or superior quality in model performance.
arXiv Detail & Related papers (2024-04-15T12:38:46Z)
- An efficient quantum algorithm for independent component analysis [3.400945485383699]
Independent component analysis (ICA) is a fundamental data processing technique that decomposes captured signals into components that are as independent as possible.
This paper presents a quantum ICA algorithm which focuses on computing a specified contrast function on a quantum computer.
arXiv Detail & Related papers (2023-11-21T11:21:23Z)
- Forward and Backward Constrained Bisimulations for Quantum Circuits using Decision Diagrams [3.788308836856851]
We develop efficient methods for the simulation of quantum circuits on classical computers.
In particular, we show that constrained bisimulation can boost decision-diagram-based quantum circuit simulation by several orders of magnitude.
arXiv Detail & Related papers (2023-08-18T12:40:47Z)
- Deep Unrolling for Nonconvex Robust Principal Component Analysis [75.32013242448151]
We design algorithms for Robust Principal Component Analysis (RPCA).
It consists in decomposing a matrix into the sum of a low-rank matrix and a sparse matrix.
arXiv Detail & Related papers (2023-07-12T03:48:26Z)
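The low-rank-plus-sparse decomposition named in the entry above can be made concrete with a minimal NumPy sketch of a classical alternating proximal scheme for M ~ L + S (singular-value thresholding for the low-rank block, entrywise soft-thresholding for the sparse block). This is a textbook baseline, not the paper's unrolled network; lam, tau, and the iteration count are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal step for the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft-thresholding: proximal step for the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, tau=1.0, n_iter=200):
    """Block-coordinate proximal descent for M ~ L (low rank) + S (sparse)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))   # standard principal-component-pursuit choice
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, tau)                 # fit the low-rank part
        S = soft(M - L, tau * lam)          # fit the sparse part
    return L, S

# usage: a rank-2 matrix plus sparse corruptions
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
M[rng.random(M.shape) < 0.05] += 10.0
L, S = rpca(M)
```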
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt to combine partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of a stochastic optimization algorithm can be bounded in terms of the complexity of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
arXiv Detail & Related papers (2021-03-05T04:42:32Z)
- Stochastic Approximation for Online Tensorial Independent Component Analysis [98.34292831923335]
Independent component analysis (ICA) has been a popular dimension reduction tool in statistical machine learning and signal processing.
In this paper, we present, as a by-product of our analysis, an online tensorial ICA algorithm that estimates each independent component.
arXiv Detail & Related papers (2020-12-28T18:52:37Z)
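For a concrete sense of the online stochastic-approximation flavor of the entry above, here is a minimal NumPy sketch of a classical kurtosis-style update on the unit sphere that recovers one independent component from a stream of whitened mixtures. The Laplace sources, step size, and iteration count are toy assumptions, and the update is a generic textbook rule rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy whitened ICA stream: x = A s with orthogonal A and unit-variance
# Laplace sources (positive excess kurtosis, so kurtosis maxima sit at columns of A).
d = 5
A, _ = np.linalg.qr(rng.standard_normal((d, d)))

def sample_x():
    s = rng.laplace(size=d) / np.sqrt(2.0)   # unit-variance non-Gaussian sources
    return A @ s

# Online kurtosis-style stochastic approximation on the unit sphere:
# ascend E[(u . x)^4] one sample at a time, then renormalize.
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
eta = 1e-3
for _ in range(200_000):
    x = sample_x()
    u = u + eta * (u @ x) ** 3 * x
    u /= np.linalg.norm(u)                   # projection back onto the sphere

# On success u aligns (up to sign) with one mixing column, i.e. one component.
print(np.max(np.abs(A.T @ u)))               # close to 1.0
```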
- Expectation propagation on the diluted Bayesian classifier [0.0]
We introduce a statistical mechanics inspired strategy that addresses the problem of sparse feature selection in the context of binary classification.
A computational scheme known as expectation propagation (EP) is used to train a continuous-weights perceptron learning a classification rule.
EP is a robust and competitive algorithm in terms of variable selection properties, estimation accuracy and computational complexity.
arXiv Detail & Related papers (2020-09-20T23:59:44Z)