Convergence bounds for nonlinear least squares and applications to
tensor recovery
- URL: http://arxiv.org/abs/2108.05237v1
- Date: Wed, 11 Aug 2021 14:14:02 GMT
- Title: Convergence bounds for nonlinear least squares and applications to
tensor recovery
- Authors: Philipp Trunschke
- Abstract summary: We consider the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed.
A critical analysis of our results allows us to derive a sample efficient algorithm for the model set of low-rank tensors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of approximating a function in general nonlinear
subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm
can be computed. Of particular interest in this setting is the concept of
sample complexity, the number of samples that are necessary to recover the best
approximation. Bounds for this quantity have been derived in a previous work
and depend primarily on the model class and are not influenced positively by
the regularity of the sought function. This result, however, is only a worst-case
bound and is not able to explain the remarkable performance of iterative hard
thresholding algorithms that is observed in practice. We reexamine the results
of the previous paper and derive a new bound that is able to utilize the
regularity of the sought function. A critical analysis of our results allows us
to derive a sample efficient algorithm for the model set of low-rank tensors.
The viability of this algorithm is demonstrated by recovering quantities of
interest for a classical high-dimensional random partial differential equation.
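The core objects in the abstract admit a compact illustration. In this line of work, the weighted empirical norm typically takes the form $\|u\|_n^2 = \frac{1}{n} \sum_{i=1}^n w(x_i)\, u(x_i)^2$ for samples $x_i$ drawn from a sampling measure, and iterative hard thresholding alternates a gradient step on the resulting empirical least-squares loss with a projection back onto the model set. The sketch below is a minimal matrix-case illustration of that pattern, not the paper's algorithm: matrix entries play the role of Monte Carlo samples, the rank-$r$ projection is a truncated SVD, and the helper names, step size, rank, and sample count are hypothetical choices.

```python
import numpy as np

def hard_threshold(X, r):
    """Project X onto matrices of rank at most r via a truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def iht_recover(idx, vals, weights, shape, r, step=1.0, iters=200):
    """Iterative hard thresholding: a gradient step on the weighted empirical
    least-squares loss (supported on the sampled entries), followed by a
    projection back onto the rank-r model set."""
    A = np.zeros(shape)
    rows, cols = idx[:, 0], idx[:, 1]
    for _ in range(iters):
        G = np.zeros(shape)
        # Gradient of 0.5 * sum_k w_k * (A[i_k, j_k] - y_k)^2.
        np.add.at(G, (rows, cols), weights * (A[rows, cols] - vals))
        A = hard_threshold(A - step * G, r)
    return A

# Toy usage: recover a random rank-2 matrix from 600 of its 900 entries.
rng = np.random.default_rng(0)
A_true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
flat = rng.choice(30 * 30, size=600, replace=False)
idx = np.stack(np.unravel_index(flat, (30, 30)), axis=1)
vals = A_true[idx[:, 0], idx[:, 1]]
A_hat = iht_recover(idx, vals, np.ones(600), (30, 30), r=2)
print(np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))  # small
```

With unit weights and step size 1, each iteration replaces the residual on the observed entries exactly and then truncates the rank, which is the classical alternating-projection form of hard thresholding.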
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Efficient Algorithms for Regularized Nonnegative Scale-invariant Low-rank Approximation Models [3.6034001987137767]
We show that scale-invariance inherent to low-rank approximation models causes an implicit regularization with both unexpected beneficial and detrimental effects.
We derive a generic Majorization Minimization algorithm that handles many regularized nonnegative low-rank approximations.
We showcase our contributions on sparse Nonnegative Matrix Factorization, ridge-regularized Canonical Polyadic decomposition and sparse Nonnegative Tucker Decomposition (a minimal Majorization-Minimization sketch for plain NMF appears after this list).
arXiv Detail & Related papers (2024-03-27T12:49:14Z) - High-Probability Bounds for Stochastic Optimization and Variational
Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Off-policy estimation of linear functionals: Non-asymptotic theory for
semi-parametric efficiency [59.48096489854697]
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures.
We prove non-asymptotic upper bounds on the mean-squared error of such procedures.
We establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds.
arXiv Detail & Related papers (2022-09-26T23:50:55Z) - Tensor Recovery Based on A Novel Non-convex Function Minimax Logarithmic
Concave Penalty Function [5.264776812468168]
In this paper, we propose a new non-convex penalty, the Minimax Logarithmic Concave Penalty (MLCP) function.
The proposed function is generalized to tensor cases, yielding a weighted tensor $L\gamma$-norm.
It is proved that the sequence generated by the proposed algorithm has finite length and converges globally to a critical point.
arXiv Detail & Related papers (2022-06-25T12:26:53Z) - Fine-grained Generalization Analysis of Vector-valued Learning [28.722350261462463]
We start the generalization analysis of regularized vector-valued learning algorithms by presenting bounds with a mild dependency on the output dimension and a fast rate on the sample size.
To understand the interaction between optimization and learning, we further use our results to derive the first bounds for stochastic gradient descent with vector-valued functions.
As a byproduct, we derive a Rademacher complexity bound for loss function classes defined in terms of a general strongly convex function.
arXiv Detail & Related papers (2021-04-29T07:57:34Z) - Goal-directed Generation of Discrete Structures with Conditional
Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Slice Sampling for General Completely Random Measures [74.24975039689893]
We present a novel Markov chain Monte Carlo algorithm for posterior inference that adaptively sets the truncation level using auxiliary slice variables (a generic slice-sampling sketch appears after this list).
The efficacy of the proposed algorithm is evaluated on several popular nonparametric models.
arXiv Detail & Related papers (2020-06-24T17:53:53Z)
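As a point of reference for the Majorization-Minimization entry above: the classical Lee-Seung multiplicative updates for plain NMF are the textbook instance of MM in this setting, since each update exactly minimizes a separable majorizing surrogate of $\|V - WH\|_F^2$ and the loss is therefore monotonically non-increasing. This is a generic sketch under that classical setup, not the cited paper's algorithm, which additionally handles regularization and scale invariance; `nmf_mm` and its defaults are hypothetical, and `V` is assumed entrywise nonnegative.

```python
import numpy as np

def nmf_mm(V, r, iters=500, eps=1e-12, seed=0):
    """Plain NMF, V ~= W @ H with W, H >= 0, via Lee-Seung multiplicative
    updates (a Majorization-Minimization scheme for the Frobenius loss)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # minimizer of the surrogate in H
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # minimizer of the surrogate in W
    return W, H

# Toy usage on a random nonnegative matrix.
V = np.abs(np.random.default_rng(1).standard_normal((20, 15)))
W, H = nmf_mm(V, r=4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```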
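Likewise, for the slice-sampling entry: the auxiliary-variable idea is easiest to see in Neal's generic one-dimensional slice sampler, sketched below assuming only a pointwise-evaluable, possibly unnormalized log-density `logp`. The CRM-specific adaptive truncation of the cited paper is not reproduced here.

```python
import numpy as np

def slice_sample(logp, x0, n, w=1.0, seed=0):
    """Neal's (2003) 1D slice sampler with stepping-out and shrinkage.
    Each iteration draws an auxiliary slice height u ~ Uniform(0, p(x)),
    then samples x uniformly from the slice {x : p(x) > u}."""
    rng = np.random.default_rng(seed)
    xs, x = [], x0
    for _ in range(n):
        logu = logp(x) + np.log(rng.random())  # auxiliary slice variable
        L = x - w * rng.random()               # randomly placed initial bracket
        R = L + w
        while logp(L) > logu:                  # step out to the left
            L -= w
        while logp(R) > logu:                  # step out to the right
            R += w
        while True:                            # shrink bracket until accepted
            x_new = rng.uniform(L, R)
            if logp(x_new) > logu:
                x = x_new
                break
            if x_new < x:
                L = x_new
            else:
                R = x_new
        xs.append(x)
    return np.array(xs)

# Toy usage: sample from a standard normal (unnormalized log-density).
draws = slice_sample(lambda t: -0.5 * t * t, 0.0, 5000)
print(draws.mean(), draws.std())  # approximately 0 and 1
```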
This list is automatically generated from the titles and abstracts of the papers on this site.