Information-Theoretic Guarantees for Recovering Low-Rank Tensors from Symmetric Rank-One Measurements
- URL: http://arxiv.org/abs/2502.05134v1
- Date: Fri, 07 Feb 2025 18:12:32 GMT
- Title: Information-Theoretic Guarantees for Recovering Low-Rank Tensors from Symmetric Rank-One Measurements
- Authors: Eren C. Kızıldağ
- Abstract summary: We investigate the sample complexity of recovering tensors with low symmetric rank from symmetric rank-one measurements.
We provide a sample complexity lower bound based on Fano's inequality, and discuss broader implications of our results for two-layer polynomial networks.
- Score: 0.0
- License:
- Abstract: In this paper, we investigate the sample complexity of recovering tensors with low symmetric rank from symmetric rank-one measurements. This setting is particularly motivated by the study of higher-order interactions and the analysis of two-layer neural networks with polynomial activations (polynomial networks). Using a covering numbers argument, we analyze the performance of the symmetric rank minimization program and establish near-optimal sample complexity bounds when the underlying distribution is log-concave. Our measurement model involves random symmetric rank-one tensors, which lead to involved probability calculations. To address these challenges, we employ the Carbery-Wright inequality, a powerful tool for studying anti-concentration properties of random polynomials, and leverage orthogonal polynomials. Additionally, we provide a sample complexity lower bound based on Fano's inequality, and discuss broader implications of our results for two-layer polynomial networks.
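To make the measurement model concrete, the following minimal NumPy sketch generates symmetric rank-one measurements of a low-symmetric-rank tensor. The sizes d, r, k, m and the choice of Gaussian sensing vectors (one instance of a log-concave distribution) are illustrative assumptions, not the paper's code.

```python
# Minimal sketch (illustrative only) of symmetric rank-one measurements of a
# tensor with low symmetric rank. All names and sizes here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, r, k, m = 8, 2, 3, 200  # ambient dimension, symmetric rank, tensor order, number of measurements

# Ground-truth symmetric tensor T = sum_{j=1}^r lam_j * v_j^{⊗k}
lam = rng.normal(size=r)
V = rng.normal(size=(r, d))
T = np.zeros((d,) * k)
for j in range(r):
    comp = lam[j] * V[j]
    for _ in range(k - 1):            # build lam_j * v_j^{⊗k}
        comp = np.multiply.outer(comp, V[j])
    T += comp

# Symmetric rank-one measurements y_i = <T, a_i^{⊗k}>, with Gaussian sensing
# vectors a_i (the einsum below is written for k = 3).
A = rng.normal(size=(m, d))
y = np.array([np.tensordot(T, np.einsum('i,j,l->ijl', a, a, a), axes=k) for a in A])

# Sanity check: <T, a^{⊗k}> = sum_j lam_j * <v_j, a>^k, i.e. the output of a
# two-layer network with monomial activation t -> t**k and weights (lam, V).
y_net = (lam[None, :] * (A @ V.T) ** k).sum(axis=1)
assert np.allclose(y, y_net)
```

The final check illustrates the connection to two-layer polynomial networks mentioned in the abstract: each measurement coincides with the output of a width-r two-layer network with monomial activation evaluated at the sensing vector.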
Related papers
- Invariant Kernels: Rank Stabilization and Generalization Across Dimensions [7.154556482351604]
We show that symmetry has a pronounced impact on the rank of kernel matrices.
Specifically, we compute the rank of a kernel of fixed degree that is invariant under various groups acting independently on its two arguments.
In concrete settings, symmetry dramatically decreases the rank, making it independent of the data dimension.
arXiv Detail & Related papers (2025-02-03T23:37:43Z)
- Predicting symmetries of quantum dynamics with optimal samples [41.42817348756889]
Identifying symmetries in quantum dynamics is a crucial challenge with profound implications for quantum technologies.
We introduce a unified framework combining group representation theory and subgroup hypothesis testing to predict these symmetries with optimal efficiency.
We prove that parallel strategies achieve the same performance as adaptive or indefinite-causal-order protocols.
arXiv Detail & Related papers (2025-02-03T15:57:50Z)
- Scheme to Detect the Strong-to-weak Symmetry Breaking via Randomized Measurements [11.213906010203264]
Recent developments have highlighted a novel symmetry-breaking pattern.
Strong-to-weak symmetry breaking is typically detected using multi-replica correlation functions, such as the Rényi-2 correlator.
We propose a practical protocol for detecting strong-to-weak symmetry breaking in experiments using the randomized measurement toolbox.
arXiv Detail & Related papers (2024-12-24T12:41:38Z)
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- Mixed Matrix Completion in Complex Survey Sampling under Heterogeneous Missingness [6.278498348219109]
We propose a fast and scalable estimation algorithm that achieves sublinear convergence.
The proposed method is applied to analyze the National Health and Nutrition Examination Survey data.
arXiv Detail & Related papers (2024-02-06T12:26:58Z)
- Symmetry Breaking and Equivariant Neural Networks [17.740760773905986]
We introduce a novel notion of 'relaxed equivariance'.
We show how to incorporate this relaxation into equivariant multilayer perceptrons (E-MLPs).
The relevance of symmetry breaking is then discussed in various application domains.
arXiv Detail & Related papers (2023-12-14T15:06:48Z)
- Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z)
- The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks [51.1848572349154]
Neural network models that perfectly fit noisy data can nevertheless generalize well to unseen test data.
We consider interpolating two-layer linear neural networks trained with gradient flow on the squared loss and derive bounds on the excess risk.
arXiv Detail & Related papers (2021-08-25T22:01:01Z)
- Stochastic Approximation for Online Tensorial Independent Component Analysis [98.34292831923335]
Independent component analysis (ICA) has been a popular dimension reduction tool in statistical machine learning and signal processing.
In this paper, we present a by-product online tensorial algorithm that provides an estimate for each independent component.
arXiv Detail & Related papers (2020-12-28T18:52:37Z)
- Sampling asymmetric open quantum systems for artificial neural networks [77.34726150561087]
We present a hybrid sampling strategy which takes asymmetric properties explicitly into account, achieving fast convergence times and high scalability for asymmetric open systems.
We highlight the universal applicability of artificial neural networks.
arXiv Detail & Related papers (2020-12-20T18:25:29Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.