On the Convergence of the Dynamic Inner PCA Algorithm
- URL: http://arxiv.org/abs/2003.05928v1
- Date: Thu, 12 Mar 2020 17:50:34 GMT
- Title: On the Convergence of the Dynamic Inner PCA Algorithm
- Authors: Sungho Shin, Alex D. Smith, S. Joe Qin, Victor M. Zavala
- Abstract summary: DiPCA is a powerful method for the analysis of time-dependent data.
We show that this algorithm is a specialized variant of a coordinate maximization algorithm.
We compare the performance of the decomposition strategies with that of the off-the-shelf solver Ipopt.
- Score: 5.9931120596636935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic inner principal component analysis (DiPCA) is a powerful method for
the analysis of time-dependent multivariate data. DiPCA extracts dynamic latent
variables that capture the most dominant temporal trends by solving a
large-scale, dense, and nonconvex nonlinear program (NLP). A scalable
decomposition algorithm has been recently proposed in the literature to solve
these challenging NLPs. The decomposition algorithm performs well in practice
but its convergence properties are not well understood. In this work, we show
that this algorithm is a specialized variant of a coordinate maximization
algorithm. This observation allows us to explain why the decomposition
algorithm might work (or not) in practice and can guide improvements. We
compare the performance of the decomposition strategies with that of the
off-the-shelf solver Ipopt. The results show that decomposition is more
scalable and, surprisingly, delivers higher quality solutions.
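As a concrete illustration of the coordinate-maximization view, below is a minimal sketch (not the authors' implementation) of alternating updates for a DiPCA-style objective: with the lag weights beta fixed, the projection w is improved by a power-iteration step on the beta-weighted lagged covariance, and with w fixed, beta has a closed-form normalized update. The objective, updates, and all names (X, s, w, beta) are illustrative assumptions.
```python
import numpy as np

def dipca_coordinate_max(X, s=2, n_iter=100, tol=1e-8, seed=0):
    """Extract one dynamic latent direction from data X (T x m) with lag s."""
    T, m = X.shape
    # Lag-j cross-covariance matrices C_j = X[:-j]' X[j:]
    C = [X[:-j].T @ X[j:] for j in range(1, s + 1)]
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(m)
    w /= np.linalg.norm(w)
    beta = np.ones(s) / np.sqrt(s)
    obj_old = -np.inf
    for _ in range(n_iter):
        # Coordinate step in beta (closed form up to normalization):
        # beta_j is proportional to w' C_j w.
        c = np.array([w @ Cj @ w for Cj in C])
        beta = c / np.linalg.norm(c)
        # Coordinate step in w: one power-iteration step on the symmetrized,
        # beta-weighted lagged covariance M = sum_j beta_j (C_j + C_j') / 2.
        M = sum(bj * (Cj + Cj.T) / 2 for bj, Cj in zip(beta, C))
        w = M @ w
        w /= np.linalg.norm(w)
        obj = float(beta @ np.array([w @ Cj @ w for Cj in C]))
        if abs(obj - obj_old) < tol:
            break
        obj_old = obj
    return w, beta

# Example: a latent AR(1) signal observed through random loadings plus noise.
rng = np.random.default_rng(1)
t = np.zeros(500)
for k in range(1, 500):
    t[k] = 0.9 * t[k - 1] + rng.standard_normal()
X = np.outer(t, rng.standard_normal(5)) + 0.1 * rng.standard_normal((500, 5))
w, beta = dipca_coordinate_max(X - X.mean(0), s=2)
```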
Related papers
- Multi-body dynamic evolution sequence-assisted PSO for interval analysis [0.0]
This paper proposes a novel interval analysis method, multi-body dynamic evolution sequence-assisted PSO (DES-PSO).
By introducing a dynamic evolution sequence in place of the random sequence, the proposed method addresses the difficulty HCLPSO faces in covering the search space.
The results of the case studies demonstrate that DES-PSO can significantly improve the computational speed of interval analysis.
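The flavor of the method can be sketched as a standard PSO loop in which the uniform random coefficients are replaced by a deterministic sequence. The paper's multi-body dynamic evolution sequence is not reproduced here; a chaotic logistic-map sequence stands in for it purely as an assumption for illustration.
```python
import numpy as np

def logistic_sequence(x0=0.3, r=4.0):
    """Deterministic chaotic sequence in (0, 1), used in place of RNG draws."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def pso_deterministic(f, dim=2, n_particles=20, n_iter=200,
                      lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    seq = logistic_sequence()
    draw = lambda shape: np.fromiter(seq, float, int(np.prod(shape))).reshape(shape)
    x = lo + (hi - lo) * draw((n_particles, dim))      # initial positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]                      # global best
    for _ in range(n_iter):
        # Velocity update with deterministic coefficients r1, r2.
        r1, r2 = draw((n_particles, dim)), draw((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g, f(g)

best, val = pso_deterministic(lambda z: np.sum(z ** 2))  # minimize a sphere
```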
arXiv Detail & Related papers (2024-09-21T10:17:27Z)
- Recovering Linear Causal Models with Latent Variables via Cholesky Factorization of Covariance Matrix [21.698480201955213]
We propose a DAG structure recovering algorithm, which is based on the Cholesky factorization of the covariance matrix of the observed data.
On synthetic and real-world datasets, the algorithm is significantly faster than previous methods and achieves the state-of-the-art performance.
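The Cholesky idea can be illustrated on the latent-free linear case (a hedged sketch, not the paper's full algorithm, which also handles latent variables): assuming the variables are in causal order and x = Wx + e with W strictly lower triangular and independent noise e, the lower Cholesky factor L of Cov(x) satisfies W = I - diag(L) L^{-1}, so edges can be read off after thresholding.
```python
import numpy as np

def recover_weights(X, threshold=0.1):
    """Recover edge weights of a linear SEM from samples X (n x d),
    assuming the columns are already in a causal/topological order."""
    cov = np.cov(X, rowvar=False)
    L = np.linalg.cholesky(cov)                    # unique lower factor
    W = np.eye(len(L)) - np.diag(np.diag(L)) @ np.linalg.inv(L)
    W[np.abs(W) < threshold] = 0.0                 # prune spurious edges
    return W                                       # W[j, k] != 0  =>  k -> j

# Tiny example: x0 -> x1 -> x2 chain with unit-variance noise.
rng = np.random.default_rng(0)
e = rng.standard_normal((20000, 3))
x0 = e[:, 0]
x1 = 0.8 * x0 + e[:, 1]
x2 = -0.5 * x1 + e[:, 2]
print(recover_weights(np.column_stack([x0, x1, x2])).round(2))
```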
arXiv Detail & Related papers (2023-11-01T17:27:49Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable components separately, linearizing only the smooth parts.
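A minimal sketch of this linearization idea, in the style of a generalized conditional-gradient method; the concrete f, h, and constraint set below are assumptions for illustration, not the paper's setting.
```python
import numpy as np

def generalized_cond_grad(grad_f, lam, x0, n_iter=200):
    """min f(x) + lam*||x||_1 over the box [-1, 1]^d, linearizing only f."""
    x = x0.copy()
    for k in range(n_iter):
        g = grad_f(x)
        # Subproblem min_{v in box} <g, v> + lam*||v||_1 is separable and has
        # a closed-form solution per coordinate.
        v = np.where(g > lam, -1.0, np.where(g < -lam, 1.0, 0.0))
        gamma = 2.0 / (k + 2.0)          # standard open-loop step size
        x = x + gamma * (v - x)
    return x

# Example: f(x) = 0.5*||A x - b||^2 with a random well-scaled A.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10)) / np.sqrt(30)
b = rng.standard_normal(30)
x = generalized_cond_grad(lambda x: A.T @ (A @ x - b), lam=0.05,
                          x0=np.zeros(10))
```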
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Adaptive Stochastic Optimisation of Nonconvex Composite Objectives [2.1700203922407493]
We propose and analyse a family of generalised composite mirror descent algorithms.
With adaptive step sizes, the proposed algorithms converge without requiring prior knowledge of the problem.
We exploit the low-dimensional structure of the decision sets for high-dimensional problems.
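A hedged sketch of one member of this family: composite mirror descent with the Euclidean mirror map, AdaGrad-style adaptive step sizes, and an l1 regularizer handled by its exact prox. The objective and step rule are illustrative assumptions, not the algorithms analysed in the paper.
```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_composite_md(grad_f, lam, x0, n_iter=500, alpha=0.5, eps=1e-8):
    """min f(x) + lam*||x||_1 via mirror descent with adaptive step sizes."""
    x = x0.copy()
    g_accum = np.zeros_like(x)           # running sum of squared gradients
    for _ in range(n_iter):
        g = grad_f(x)
        g_accum += g ** 2
        step = alpha / (np.sqrt(g_accum) + eps)   # per-coordinate step size
        # Composite step: gradient step on f, exact prox of lam*||.||_1.
        x = soft_threshold(x - step * g, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 15))
b = rng.standard_normal(40)
x = adaptive_composite_md(lambda x: A.T @ (A @ x - b), lam=1.0,
                          x0=np.zeros(15))
```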
arXiv Detail & Related papers (2022-11-21T18:31:43Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem in machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
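For context, the quantity being optimized can be computed as follows: average precision, the standard finite-sample estimate of the area under the precision-recall curve (the paper's surrogate losses and stability analysis are not reproduced here).
```python
import numpy as np

def average_precision(scores, labels):
    """AP = mean over positives of the precision at each positive's rank."""
    order = np.argsort(-scores)                # rank by decreasing score
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)                   # positives seen so far
    ranks = np.arange(1, len(labels) + 1)
    precision_at_pos = hits[labels == 1] / ranks[labels == 1]
    return precision_at_pos.mean()

print(average_precision(np.array([0.9, 0.8, 0.3, 0.2]),
                        np.array([1, 0, 1, 0])))   # -> (1/1 + 2/3) / 2
```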
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
Using this framework, our analysis shows that these algorithms match the computational complexity of methods that have access to an unbiased estimate of the gradient.
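A simplified sketch of warm-started implicit differentiation on a toy quadratic bilevel problem (the problem and constants are illustrative assumptions): a few inner gradient steps warm-started from the previous solution amortize the lower-level solve, and the hypergradient follows from the implicit function theorem.
```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
B = rng.standard_normal((d, d)) / np.sqrt(d)   # toy coupling matrix
y_target = rng.standard_normal(d)

x = np.zeros(d)
y = np.zeros(d)                                # inner variable, warm-started
for outer in range(300):
    # Lower level G(x, y) = 0.5*||y||^2 - y'Bx has solution y*(x) = B x.
    # A few gradient steps from the previous y amortize the inner solve.
    for _ in range(3):
        y -= 0.5 * (y - B @ x)
    # Implicit hypergradient of F(x) = 0.5*||y*(x) - y_target||^2:
    # dy*/dx = -inv(grad_yy G) @ grad_xy G = B, so dF/dx = B'(y - y_target).
    x -= 0.05 * B.T @ (y - y_target)
```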
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are collectively termed sparse principal component analysis (SPCA).
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
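A minimal sketch of a thresholding-style approximation for SPCA (the paper's exact algorithm and guarantees are not reproduced): take the top eigenvector of the covariance, keep its k largest-magnitude coordinates, and re-solve the small eigenproblem on that support.
```python
import numpy as np

def threshold_spca(A, k):
    """Approximate argmax x'Ax over unit vectors with at most k nonzeros."""
    _, vecs = np.linalg.eigh(A)
    v = vecs[:, -1]                                  # top eigenvector of A
    support = np.argsort(-np.abs(v))[:k]             # k largest |entries|
    _, sub_vecs = np.linalg.eigh(A[np.ix_(support, support)])
    x = np.zeros(A.shape[0])
    x[support] = sub_vecs[:, -1]                     # re-solve on support
    return x

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
X[:, :3] += 3.0 * rng.standard_normal((200, 1))     # 3 correlated columns
x = threshold_spca(np.cov(X, rowvar=False), k=3)
```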
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
- Optimization of Graph Total Variation via Active-Set-based Combinatorial Reconditioning [48.42916680063503]
We propose a novel adaptive preconditioning strategy for proximal algorithms on this problem class.
We show that nested-forest decomposition of the inactive edges yields a guaranteed local linear convergence rate.
Our results suggest that local convergence analysis can serve as a guideline for selecting variable metrics in proximal algorithms.
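The paper's combinatorial reconditioning is not reproduced here, but the role of preconditioning in proximal algorithms on graph total variation can be illustrated, as a hedged sketch, with the standard diagonal (degree-based) preconditioners of Pock and Chambolle in a primal-dual method.
```python
import numpy as np

def graph_tv_denoise(b, edges, lam=1.0, n_iter=300):
    """min_x 0.5*||x - b||^2 + lam * sum_{(i,j) in E} |x_i - x_j|."""
    n, m = len(b), len(edges)
    K = np.zeros((m, n))                       # edge-incidence operator
    for e, (i, j) in enumerate(edges):
        K[e, i], K[e, j] = 1.0, -1.0
    # Diagonal preconditioners (Pock-Chambolle): per-node and per-edge step
    # sizes from column/row sums of |K|, instead of one global step size.
    tau = 1.0 / np.maximum(np.abs(K).sum(axis=0), 1e-12)
    sigma = 1.0 / np.abs(K).sum(axis=1)
    x, x_bar, y = b.copy(), b.copy(), np.zeros(m)
    for _ in range(n_iter):
        y = np.clip(y + sigma * (K @ x_bar), -lam, lam)        # dual prox
        x_new = (x - tau * (K.T @ y) + tau * b) / (1.0 + tau)  # primal prox
        x_bar, x = 2 * x_new - x, x_new
    return x

# Chain-graph example: denoise a noisy piecewise-constant signal.
edges = [(i, i + 1) for i in range(49)]
rng = np.random.default_rng(0)
noisy = np.repeat([0.0, 1.0], 25) + 0.1 * rng.standard_normal(50)
x = graph_tv_denoise(noisy, edges, lam=0.5)
```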
arXiv Detail & Related papers (2020-02-27T16:33:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.