Learning Transition Operators From Sparse Space-Time Samples
- URL: http://arxiv.org/abs/2212.00746v1
- Date: Thu, 1 Dec 2022 18:33:59 GMT
- Title: Learning Transition Operators From Sparse Space-Time Samples
- Authors: Christian K\"ummerle, Mauro Maggioni, Sui Tang
- Abstract summary: We consider the nonlinear problem of learning a transition operator $\mathbf{A}$ from partial observations at different times.
We show that no more than $\mathcal{O}(rn \log(nT))$ space-time samples are sufficient to ensure accurate recovery of a rank-$r$ operator.
- Score: 11.859913430860335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the nonlinear inverse problem of learning a transition operator
$\mathbf{A}$ from partial observations at different times, in particular from
sparse observations of entries of its powers
$\mathbf{A},\mathbf{A}^2,\cdots,\mathbf{A}^{T}$. This Spatio-Temporal
Transition Operator Recovery problem is motivated by the recent interest in
learning time-varying graph signals that are driven by graph operators
depending on the underlying graph topology. We address the nonlinearity of the
problem by embedding it into a higher-dimensional space of suitable
block-Hankel matrices, where it becomes a low-rank matrix completion problem,
even if $\mathbf{A}$ is of full rank. For both a uniform and an adaptive random
space-time sampling model, we quantify the recoverability of the transition
operator via suitable measures of incoherence of these block-Hankel embedding
matrices. For graph transition operators these measures of incoherence depend
on the interplay between the dynamics and the graph topology. We develop a
suitable non-convex iterative reweighted least squares (IRLS) algorithm,
establish its quadratic local convergence, and show that, in optimal scenarios,
no more than $\mathcal{O}(rn \log(nT))$ space-time samples are sufficient to
ensure accurate recovery of a rank-$r$ operator $\mathbf{A}$ of size $n \times
n$. This establishes that spatial samples can be substituted by a comparable
number of space-time samples. We provide an efficient implementation of the
proposed IRLS algorithm with space complexity of order $O(r n T)$ and
per-iteration time complexity linear in $n$. Numerical experiments for
transition operators based on several graph models confirm that the theoretical
findings accurately track empirical phase transitions, and illustrate the
applicability and scalability of the proposed algorithm.
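To make the block-Hankel embedding described above concrete, the sketch below builds the matrix $\mathbf{H}$ whose $(i,j)$-th $n \times n$ block is $\mathbf{A}^{i+j-1}$, so that every space-time sample of an entry of a power $\mathbf{A}^t$ appears as an entry of $\mathbf{H}$, and $\mathbf{H}$ factors into a block column of powers times a block row of powers, which bounds its rank by the rank of $\mathbf{A}$. This is a minimal illustration under assumed choices (the split of the $T$ powers into $T_1$ block rows and $T_2$ block columns, the function names, and the uniform sampling routine), not the paper's exact construction or its IRLS solver.

```python
import numpy as np

def block_hankel(A, T1, T2):
    """Block-Hankel embedding: the (i, j)-th n-by-n block is A^(i+j-1),
    for i = 1..T1 and j = 1..T2, covering the powers A, ..., A^(T1+T2-1)."""
    n = A.shape[0]
    powers = [np.linalg.matrix_power(A, k) for k in range(T1 + T2)]
    H = np.zeros((n * T1, n * T2))
    for i in range(T1):
        for j in range(T2):
            H[i * n:(i + 1) * n, j * n:(j + 1) * n] = powers[i + j + 1]
    return H

def uniform_space_time_samples(A, T, m, rng=None):
    """Illustrative uniform space-time sampling: m triples (row, col, t) with
    the observed value (A^t)[row, col], where t is drawn uniformly from 1..T."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    powers = [np.linalg.matrix_power(A, t) for t in range(T + 1)]
    samples = []
    for _ in range(m):
        t = int(rng.integers(1, T + 1))
        row, col = (int(x) for x in rng.integers(0, n, size=2))
        samples.append((row, col, t, powers[t][row, col]))
    return samples

# H factors as [A; A^2; ...; A^T1] @ [I, A, ..., A^(T2-1)], so its rank is at
# most rank(A) even though H is (n*T1) x (n*T2).
n, r, T1, T2 = 8, 2, 3, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n)) / n  # rank-r operator
H = block_hankel(A, T1, T2)
print(np.linalg.matrix_rank(H), "<=", np.linalg.matrix_rank(A))  # e.g. 2 <= 2
```

Under this embedding, the sparse space-time samples become observed entries of $\mathbf{H}$, and recovering $\mathbf{A}$ amounts to completing the low-rank, block-Hankel-structured matrix $\mathbf{H}$, which is the task the proposed IRLS algorithm addresses.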
Related papers
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance reduced ZO estimators for complex gradient problems.
We improve the state-of-the-art function complexities from $\mathcal{O}\left(\min\left\{\frac{d n^{1/2}}{\epsilon^{2}}, \frac{d}{\epsilon^{3}}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^{2}}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z) - A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed approach avoids the need for computationally expensive spectral methods, using a simple gradient step while handling outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) framework, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) methods have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - A Finite Expression Method for Solving High-Dimensional Committor Problems [5.748690310135373]
We explore the finite expression method (FEX) as a tool for computing the committor.
The FEX-based committor solver is tested on several high-dimensional benchmark problems.
arXiv Detail & Related papers (2023-06-21T13:43:59Z) - Orthogonal Directions Constrained Gradient Method: from non-linear
equality constraints to Stiefel manifold [16.099883128428054]
We propose a novel algorithm, the Orthogonal Directions Constrained Gradient Method (ODCGM).
ODCGM only requires computing a projection onto a vector space.
We show that ODCGM exhibits near-optimal oracle complexities.
arXiv Detail & Related papers (2023-03-16T12:25:53Z) - Multi-block-Single-probe Variance Reduced Estimator for Coupled
Compositional Optimization [49.58290066287418]
We propose a novel method, the Multi-block-Single-probe Variance Reduced (MSVR) estimator, to alleviate the complexity of coupled compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z) - Optimal Randomized Approximations for Matrix based Renyi's Entropy [16.651155375441796]
We develop random approximations for integer order $\alpha$ cases and series approximations for non-integer $\alpha$ cases.
Large-scale simulations and real-world applications validate the effectiveness of the developed approximations.
arXiv Detail & Related papers (2022-05-16T02:24:52Z) - Linear-Time Gromov Wasserstein Distances using Low Rank Couplings and
Costs [45.87981728307819]
The ability to compare and align related datasets living in heterogeneous spaces plays an increasingly important role in machine learning.
The Gromov-Wasserstein (GW) formalism can help tackle this problem.
arXiv Detail & Related papers (2021-06-02T12:50:56Z) - Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k \times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2})$ samples up to polylogarithmic factors, a number linear in the high dimension $k$ and nearly linear in the typically low rank $r$.
arXiv Detail & Related papers (2020-09-30T19:10:32Z) - Phase retrieval in high dimensions: Statistical and computational phase
transitions [27.437775143419987]
We consider the problem of reconstructing a signal $\mathbf{X}^\star$ from $m$ (possibly noisy) observations.
In particular, the information-theoretic transition to perfect recovery for full-rank matrices appears at $\alpha = 1$ and $\alpha = 2$.
Our work provides an extensive classification of the statistical and algorithmic thresholds in high-dimensional phase retrieval.
arXiv Detail & Related papers (2020-06-09T13:03:29Z)