Convergence bounds for nonlinear least squares for tensor recovery
- URL: http://arxiv.org/abs/2208.10954v1
- Date: Tue, 23 Aug 2022 13:25:30 GMT
- Title: Convergence bounds for nonlinear least squares for tensor recovery
- Authors: Philipp Trunschke
- Abstract summary: We consider the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed.
By restricting the model class to a neighbourhood of the best approximation, we can derive improved worst-case bounds for the sample complexity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of approximating a function in general nonlinear
subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be
computed. Of particular interest in this setting is the concept of sample
complexity, the number of sample points that are necessary to achieve a
prescribed error with high probability. Reasonable worst-case bounds for this
quantity exist only for particular subsets of $L^2$, like linear spaces or sets of
sparse vectors. For more general subsets, like tensor networks, the currently
existing bounds are very pessimistic. By restricting the model class to a
neighbourhood of the best approximation, we can derive improved worst-case
bounds for the sample complexity. When the considered neighbourhood is a
manifold with positive local reach, the sample complexity can be estimated by
the sample complexity of the tangent space and the product of the sample
complexity of the normal space and the manifold's curvature.
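To make the setting concrete, the following sketch draws weighted Monte Carlo samples, forms the empirical estimate of the $L^2$ error, and fits a nonlinear model class by least squares. The target function, sampling density, and three-parameter model class below are assumptions chosen purely for illustration; this is not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Target function on (0, 1) -- an assumption chosen for illustration.
def u(x):
    return np.exp(-3.0 * x) * np.sin(5.0 * x)

# Nonlinear model class: v_theta(x) = theta0 * exp(theta1 * x) * sin(theta2 * x).
# It contains u exactly (theta = (1, -3, 5)), so the best approximation error is 0.
def v(theta, x):
    return theta[0] * np.exp(theta[1] * x) * np.sin(theta[2] * x)

# Weighted Monte Carlo estimate of the squared L2(0,1) error:
# draw x_i ~ rho and set w_i = 1 / rho(x_i), so that
# (1/n) * sum_i w_i * (u(x_i) - v(x_i))^2  ~  ||u - v||_{L2}^2.
n = 200
x = rng.beta(2.0, 2.0, size=n)        # sampling density rho = Beta(2, 2)
w = 1.0 / (6.0 * x * (1.0 - x))       # Beta(2, 2) density on (0, 1) is 6 x (1 - x)

# Empirical (weighted) least squares over the model class.
def residuals(theta):
    return np.sqrt(w / n) * (u(x) - v(theta, x))

fit = least_squares(residuals, x0=np.array([1.0, -1.0, 4.0]))  # start near the truth

# Compare the empirical error with the true L2 error on a dense uniform grid.
xq = np.linspace(1e-4, 1.0 - 1e-4, 10_000)
true_err = np.sqrt(np.mean((u(xq) - v(fit.x, xq)) ** 2))
emp_err = np.sqrt(np.sum(w * (u(x) - v(fit.x, x)) ** 2) / n)
print(f"empirical weighted error: {emp_err:.4f}   true L2 error: {true_err:.4f}")
```

With the weights $w_i = 1/\rho(x_i)$, the empirical functional is an unbiased estimator of the squared $L^2$ error; the paper's question is how many samples are needed so that, with high probability, minimizing this empirical functional over the nonlinear model class yields a quasi-optimal approximation.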
Related papers
- Low Complexity Regularized Phase Retrieval [0.6522338519818377]
This regularizer is intended to promote solutions conforming to some notion of simplicity or low complexity.
We investigate both noiseless recovery and stability to noise and provide a very general and unified analysis framework.
arXiv Detail & Related papers (2024-07-23T11:58:08Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) framework, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Polynomial-Time Solutions for ReLU Network Training: A Complexity
Classification via Max-Cut and Zonotopes [70.52097560486683]
We prove that the hardness of approximation of ReLU networks not only mirrors the complexity of the Max-Cut problem but also, in certain special cases, exactly corresponds to it.
In particular, when $\epsilon \leq \sqrt{84/83} - 1 \approx 0.006$, we show that it is NP-hard to find an approximate global optimizer of the ReLU network objective with relative error $\epsilon$ with respect to the objective value.
arXiv Detail & Related papers (2023-11-18T04:41:07Z) - The Complexity of Being Entangled [0.0]
Nielsen's approach to quantum state complexity relates the minimal number of quantum gates required to prepare a state to the length of geodesics computed with a certain norm on the manifold of unitary transformations.
For a bipartite system, we investigate binding complexity, which corresponds to norms in which gates acting on a single subsystem are free of cost.
arXiv Detail & Related papers (2023-11-07T19:00:02Z) - Sample Complexity Bounds for Estimating Probability Divergences under Invariances [31.946304450935628]
Group-invariant probability distributions appear in many data-generative models in machine learning.
In this work, we study how the inherent invariances, with respect to any smooth action of a Lie group on a manifold, improve sample complexity.
Results are completely new for groups of positive dimension and extend recent bounds for finite group actions.
arXiv Detail & Related papers (2023-11-06T04:45:21Z) - Lattice-Based Methods Surpass Sum-of-Squares in Clustering [98.46302040220395]
Clustering is a fundamental primitive in unsupervised learning.
Recent work has established lower bounds against the class of low-degree methods.
We show that, perhaps surprisingly, this particular clustering model does not exhibit a statistical-to-computational gap.
arXiv Detail & Related papers (2021-12-07T18:50:17Z) - Local versions of sum-of-norms clustering [77.34726150561087]
We show that our method can separate arbitrarily close balls in the ball model.
We prove a quantitative bound on the error incurred in the clustering of disjoint connected sets.
arXiv Detail & Related papers (2021-09-20T14:45:29Z) - Convergence bounds for nonlinear least squares and applications to
tensor recovery [0.0]
We consider the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed.
A critical analysis of our results allows us to derive a sample efficient algorithm for the model set of low-rank tensors; a toy illustration of such a recovery is sketched after this list.
arXiv Detail & Related papers (2021-08-11T14:14:02Z) - Bilinear Classes: A Structural Framework for Provable Generalization in
RL [119.42509700822484]
Bilinear Classes is a new structural framework which permits generalization in reinforcement learning.
The framework incorporates nearly all existing models in which a polynomial sample complexity is achievable.
Our main result provides an RL algorithm with polynomial sample complexity for Bilinear Classes.
arXiv Detail & Related papers (2021-03-19T16:34:20Z)
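For the companion paper "Convergence bounds for nonlinear least squares and applications to tensor recovery" listed above, the model set of low-rank tensors can be illustrated in its simplest instance: an order-2 tensor (a matrix) of fixed rank, recovered from randomly sampled entries by alternating least squares. The dimensions, rank, sampling scheme, and iteration count below are assumptions for this toy sketch, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth low-rank "tensor" (order 2 for simplicity): X = A @ B.T has rank r.
m, p, r = 40, 40, 3
A = rng.standard_normal((m, r))
B = rng.standard_normal((p, r))
X = A @ B.T

# Monte Carlo samples: n entry positions (i, j) drawn uniformly with replacement.
n = 2000
rows = rng.integers(0, m, size=n)
cols = rng.integers(0, p, size=n)
y = X[rows, cols]

# Alternating least squares on the rank-r model U @ V.T: fix V and solve a small
# linear least squares problem for each row of U, then swap the roles. With this
# level of oversampling and noiseless data, ALS typically converges to the truth.
U = rng.standard_normal((m, r))
V = rng.standard_normal((p, r))
for _ in range(30):
    for i in range(m):
        mask = rows == i
        if mask.any():
            U[i] = np.linalg.lstsq(V[cols[mask]], y[mask], rcond=None)[0]
    for j in range(p):
        mask = cols == j
        if mask.any():
            V[j] = np.linalg.lstsq(U[rows[mask]], y[mask], rcond=None)[0]

rel_err = np.linalg.norm(U @ V.T - X) / np.linalg.norm(X)
print(f"relative recovery error: {rel_err:.2e}")
```

Each inner step is a small linear least squares solve, so the nonlinear model set is handled through a sequence of linear projections; the sample complexity question is how many sampled entries such a scheme needs to succeed with high probability.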
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.