Localization in 1D non-parametric latent space models from pairwise
affinities
- URL: http://arxiv.org/abs/2108.03098v2
- Date: Fri, 11 Aug 2023 14:46:40 GMT
- Title: Localization in 1D non-parametric latent space models from pairwise
affinities
- Authors: Christophe Giraud and Yann Issartel and Nicolas Verzelen
- Abstract summary: We consider the problem of estimating latent positions in a one-dimensional torus from pairwise affinities.
We introduce an estimation procedure that provably localizes all the latent positions with a maximum error of the order of $\sqrt{\log(n)/n}$, with high probability.
- Score: 6.982738885923206
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of estimating latent positions in a one-dimensional
torus from pairwise affinities. The observed affinity between a pair of items
is modeled as a noisy observation of a function $f(x^*_{i},x^*_{j})$ of the
latent positions $x^*_{i},x^*_{j}$ of the two items on the torus. The affinity
function $f$ is unknown, and it is only assumed to fulfill some shape
constraints ensuring that $f(x,y)$ is large when the distance between $x$ and
$y$ is small, and vice versa. This non-parametric modeling offers good
flexibility to fit data. We introduce an estimation procedure that provably
localizes all the latent positions with a maximum error of the order of
$\sqrt{\log(n)/n}$, with high probability. This rate is proven to be minimax
optimal. A computationally efficient variant of the procedure is also analyzed
under some more restrictive assumptions. Our general results can be
instantiated to the problem of statistical seriation, leading to new bounds for
the maximum error in the ordering.
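To make the model concrete, here is a minimal Python sketch that simulates pairwise affinities on the torus and runs a naive spectral baseline for localization. The affinity choice $f(x,y)=\exp(-10\,d(x,y))$, the noise level, and the spectral estimator are all illustrative assumptions; this is not the estimation procedure of the paper.

```python
import numpy as np

# Minimal simulation of the latent space model: n items with latent
# positions x*_i on the 1D torus [0, 1), an affinity f that decreases
# with the torus distance, and additive Gaussian observation noise.
rng = np.random.default_rng(0)
n = 300
x_star = rng.uniform(0.0, 1.0, size=n)

def torus_dist(x, y):
    """Geodesic distance on the torus [0, 1)."""
    d = np.abs(x[:, None] - y[None, :])
    return np.minimum(d, 1.0 - d)

# Illustrative affinity (one shape-constrained choice among many):
# large when positions are close, small when they are far.
F = np.exp(-10.0 * torus_dist(x_star, x_star))
A = F + 0.5 * rng.standard_normal((n, n))
A = (A + A.T) / 2.0  # symmetrize the noise

# Naive spectral baseline (NOT the paper's procedure): after double-
# centering, the two leading eigenvectors of a translation-invariant
# kernel on the circle approximate the first Fourier pair, so the
# angle along the resulting closed curve estimates each position.
row_means = A.mean(axis=1, keepdims=True)
A_c = A - row_means - row_means.T + A.mean()
eigvals, eigvecs = np.linalg.eigh(A_c)
v1, v2 = eigvecs[:, -1], eigvecs[:, -2]
x_hat = (np.arctan2(v2, v1) / (2.0 * np.pi)) % 1.0

def max_error(x_hat, x_star):
    """Max torus error, minimized over rotations and reflections."""
    best = np.inf
    for sign in (1.0, -1.0):
        for shift in np.linspace(0.0, 1.0, 400, endpoint=False):
            d = (sign * x_hat + shift - x_star) % 1.0
            best = min(best, np.minimum(d, 1.0 - d).max())
    return best

print(f"max localization error: {max_error(x_hat, x_star):.3f}")
```

Note that positions on the torus are only identifiable up to rotation and reflection, which is why the error is evaluated over that symmetry group.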
Related papers
- Minimax Optimality of Score-based Diffusion Models: Beyond the Density Lower Bound Assumptions [11.222970035173372]
We show that a kernel-based score estimator achieves an optimal mean square error of $\widetilde{O}\left(n^{-1} t^{-\frac{d+2}{2}}\left(t^{\frac{d}{2}} \vee 1\right)\right)$.
This leads to a $\widetilde{O}\left(n^{-1/2} t^{-\frac{d}{4}}\right)$ upper bound for the total variation error of the distribution of the samples generated by the diffusion model, under a mere sub-Gaussian assumption.
arXiv Detail & Related papers (2024-02-23T20:51:31Z)
- Revisiting Inexact Fixed-Point Iterations for Min-Max Problems: Stochasticity and Structured Nonconvexity [18.427215139020632]
We provide an analysis of inexact Halpern iteration for structured nonconvex min-max problems when the error criteria only hold with respect to a solution.
We show that multilevel Monte Carlo estimators can be used to improve on state-of-the-art complexity results in the stochastic setting.
arXiv Detail & Related papers (2024-02-07T18:22:41Z)
- The Sample Complexity Of ERMs In Stochastic Convex Optimization [13.896417716930687]
We show that in fact $\tilde{O}\left(\frac{d}{\epsilon}+\frac{1}{\epsilon^2}\right)$ data points are also sufficient.
We further generalize the result and show that a similar upper bound holds for all convex bodies.
arXiv Detail & Related papers (2023-11-09T14:29:25Z)
- Estimating the minimizer and the minimum value of a regression function under passive design [72.85024381807466]
We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$.
We derive non-asymptotic upper bounds for the quadratic risk and optimization error of $\boldsymbol{z}_n$, and for the risk of estimating $f^*$.
arXiv Detail & Related papers (2022-11-29T18:38:40Z)
- Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z)
- Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper bounds can be extended to two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z)
- A first-order primal-dual method with adaptivity to local smoothness [64.62056765216386]
We consider the problem of finding a saddle point for the convex-concave objective $\min_x \max_y f(x) + \langle Ax, y\rangle - g^*(y)$, where $f$ is a convex function with locally Lipschitz gradient and $g$ is convex and possibly non-smooth.
We propose an adaptive version of the Condat-Vu algorithm, which alternates between primal gradient steps and dual proximal steps (a minimal sketch of the basic, non-adaptive iteration appears after this list).
arXiv Detail & Related papers (2021-10-28T14:19:30Z)
- Inductive Mutual Information Estimation: A Convex Maximum-Entropy Copula Approach [0.5330240017302619]
We propose a novel estimator of the mutual information between two ordinal vectors $x$ and $y$.
We prove that, so long as the constraint is feasible, this problem admits a unique solution, it is in the exponential family, and it can be learned by solving a convex optimization problem.
We show that our approach may be used to mitigate mode collapse in GANs by maximizing the entropy of the copula of fake samples.
arXiv Detail & Related papers (2021-02-25T21:21:40Z)
- Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z)
- The Complexity of Constrained Min-Max Optimization [29.57458485068705]
We show that an approximate local min-max point of large enough approximation is guaranteed to exist.
More importantly, we show that finding an approximate fixed point of the Gradient Descent/Ascent update is PPAD-complete.
Our result is the first to show an exponential separation between these two fundamental optimization problems.
arXiv Detail & Related papers (2020-09-21T05:54:12Z)
- Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimension $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
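For the first-order primal-dual entry above, here is a minimal sketch of the basic (non-adaptive) Condat-Vu iteration on an assumed toy problem; the data, the choice $g = \|\cdot\|_1$, and the step sizes are illustrative assumptions, not the paper's adaptive rule.

```python
import numpy as np

# Sketch of the basic (non-adaptive) Condat-Vu iteration for
#     min_x max_y f(x) + <Ax, y> - g*(y),
# on the assumed toy problem f(x) = 0.5*||x - b||^2 and g = ||.||_1,
# so that g* is the indicator of the unit l_inf ball and the dual
# proximal step prox_{sigma g*} reduces to a coordinatewise clip.
rng = np.random.default_rng(1)
m, d = 40, 20
A = rng.standard_normal((m, d)) / np.sqrt(m)
b = rng.standard_normal(d)

L_f = 1.0                       # Lipschitz constant of grad f
op_norm = np.linalg.norm(A, 2)  # spectral norm ||A||
sigma = 1.0
# Step sizes satisfying the standard condition 1/tau - sigma*||A||^2 >= L_f/2.
tau = 0.9 / (L_f / 2.0 + sigma * op_norm**2)

x, y = np.zeros(d), np.zeros(m)
for _ in range(500):
    x_new = x - tau * ((x - b) + A.T @ y)  # primal gradient step
    y = np.clip(y + sigma * A @ (2.0 * x_new - x), -1.0, 1.0)  # dual prox step
    x = x_new

# The saddle problem above is the primal-dual form of min_x f(x) + ||Ax||_1.
print("primal objective:", 0.5 * np.linalg.norm(x - b) ** 2 + np.abs(A @ x).sum())
```

The dual update uses the extrapolated point $2x_{k+1} - x_k$, which is what distinguishes the Condat-Vu scheme from a plain alternating gradient method.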
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.