On Outer Bi-Lipschitz Extensions of Linear Johnson-Lindenstrauss
Embeddings of Low-Dimensional Submanifolds of $\mathbb{R}^N$
- URL: http://arxiv.org/abs/2206.03376v1
- Date: Tue, 7 Jun 2022 15:10:46 GMT
- Title: On Outer Bi-Lipschitz Extensions of Linear Johnson-Lindenstrauss
Embeddings of Low-Dimensional Submanifolds of $\mathbb{R}^N$
- Authors: Mark A. Iwen, Mark Philip Roach
- Abstract summary: Let $\mathcal{M}$ be a compact $d$-dimensional submanifold of $\mathbb{R}^N$ with reach $\tau$ and volume $V_{\mathcal M}$.
We prove that a nonlinear function $f: \mathbb{R}^N \rightarrow \mathbb{R}^{m}$ exists with $m \leq C \left(d / \epsilon^2 \right) \log \left(\frac{\sqrt[d]{V_{\mathcal M}}}{\tau}\right)$.
- Score: 0.24366811507669117
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Let $\mathcal{M}$ be a compact $d$-dimensional submanifold of $\mathbb{R}^N$
with reach $\tau$ and volume $V_{\mathcal M}$. Fix $\epsilon \in (0,1)$. In
this paper we prove that a nonlinear function $f: \mathbb{R}^N \rightarrow
\mathbb{R}^{m}$ exists with $m \leq C \left(d / \epsilon^2 \right) \log
\left(\frac{\sqrt[d]{V_{\mathcal M}}}{\tau} \right)$ such that $$(1 - \epsilon)
\| {\bf x} - {\bf y} \|_2 \leq \left\| f({\bf x}) - f({\bf y}) \right\|_2 \leq
(1 + \epsilon) \| {\bf x} - {\bf y} \|_2$$ holds for all ${\bf x} \in
\mathcal{M}$ and ${\bf y} \in \mathbb{R}^N$. In effect, $f$ not only serves as
a bi-Lipschitz function from $\mathcal{M}$ into $\mathbb{R}^{m}$ with
bi-Lipschitz constants close to one, but also approximately preserves all
distances from points not in $\mathcal{M}$ to all points in $\mathcal{M}$ in
its image. Furthermore, the proof is constructive and yields an algorithm which
works well in practice. In particular, it is empirically demonstrated herein
that such nonlinear functions allow for more accurate compressive nearest
neighbor classification than standard linear Johnson-Lindenstrauss embeddings
do in practice.
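To make the empirical comparison concrete, the sketch below implements only the standard linear Johnson-Lindenstrauss baseline that the abstract refers to: a random Gaussian projection followed by compressive 1-nearest-neighbor classification. It is not the paper's nonlinear construction $f$; the synthetic near-circle data, the target dimension $m$, and the 1-NN rule are illustrative assumptions.

```python
import numpy as np

def jl_projection(X, m, rng):
    """Compress rows of X from R^N to R^m with a random Gaussian JL map."""
    N = X.shape[1]
    A = rng.standard_normal((N, m)) / np.sqrt(m)  # E||xA||^2 = ||x||^2
    return X @ A

def nearest_neighbor_classify(train_X, train_y, test_X):
    """1-NN classification using Euclidean distances in the compressed space."""
    d2 = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=-1)
    return train_y[d2.argmin(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for manifold-structured data: a noisy circle embedded in R^100.
    N, n_train, n_test, m = 100, 200, 50, 20
    theta = rng.uniform(0, 2 * np.pi, n_train + n_test)
    basis = rng.standard_normal((2, N))
    X = np.c_[np.cos(theta), np.sin(theta)] @ basis
    X += 0.01 * rng.standard_normal(X.shape)
    y = (theta < np.pi).astype(int)               # label points by half-circle
    X_emb = jl_projection(X, m, rng)
    pred = nearest_neighbor_classify(X_emb[:n_train], y[:n_train], X_emb[n_train:])
    print("compressed 1-NN accuracy:", (pred == y[n_train:]).mean())
```

Swapping `jl_projection` for the paper's nonlinear map would, per the abstract, let the same 1-NN pipeline also handle query points lying off the manifold $\mathcal{M}$ with distances preserved up to $1 \pm \epsilon$.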
Related papers
- The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z) - A class of ternary codes with few weights [0.0]
In this paper, we investigate a ternary code $\mathcal{C}$ of length $n$, defined via the trace map as $\mathcal{C} := \left(\mathrm{Tr}(d x_1), \dots, \mathrm{Tr}(d x_n)\right)$.
Using recent results on explicit evaluations of exponential sums, the Weil bound, and related techniques, we show that the dual code of $\mathcal{C}$ is optimal with respect to the Hamming bound.
arXiv Detail & Related papers (2024-10-05T16:15:50Z) - Efficient Continual Finite-Sum Minimization [52.5238287567572]
We propose a key twist on finite-sum minimization, dubbed continual finite-sum minimization.
Our approach significantly improves upon the $\mathcal{O}(n/\epsilon)$ first-order oracle calls (FOs) that $\mathrm{StochasticGradientDescent}$ requires.
We also prove that there is no natural first-order method with $\mathcal{O}\left(n/\epsilon^{\alpha}\right)$ gradient complexity for $\alpha < 1/4$, establishing that the first-order complexity of our method is nearly tight.
arXiv Detail & Related papers (2024-06-07T08:26:31Z) - Noncompact uniform universal approximation [0.0]
The universal approximation theorem is generalised to uniform convergence on the (noncompact) input space $\mathbb{R}^n$.
All continuous functions that vanish at infinity can be uniformly approximated by neural networks.
arXiv Detail & Related papers (2023-08-07T08:54:21Z) - Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix
Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
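For orientation only, here is a naive alternating heuristic for the BMF objective $\|\mathbf{A} - \mathbf{U}\mathbf{V}\|_F^2$ with binary factors. It is not the $(1+\varepsilon)$-approximation algorithm of the paper; the rank $k$, the exhaustive row search, and the iteration count are illustrative assumptions.

```python
import numpy as np

def _best_binary_rows(M, V):
    """For each row of M, pick the binary coefficient vector u in {0,1}^k
    minimizing ||u V - row||_2^2 by exhaustive search over all 2^k choices."""
    k = V.shape[0]
    patterns = (np.arange(2 ** k)[:, None] >> np.arange(k)) & 1   # (2^k, k) bit patterns
    cand = patterns @ V                                            # candidate rows (2^k, d)
    errs = ((cand[None, :, :] - M[:, None, :]) ** 2).sum(axis=-1)  # (n, 2^k)
    return patterns[errs.argmin(axis=1)]                           # (n, k)

def bmf_alternating(A, k, iters=10, seed=0):
    """Alternating heuristic for A ~ U V with binary U (n x k) and V (k x d)."""
    rng = np.random.default_rng(seed)
    V = rng.integers(0, 2, size=(k, A.shape[1]))
    for _ in range(iters):
        U = _best_binary_rows(A, V)          # fix V, optimize binary U row-wise
        V = _best_binary_rows(A.T, U.T).T    # fix U, optimize binary V column-wise
    return U, V

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    U0 = rng.integers(0, 2, (30, 3))
    V0 = rng.integers(0, 2, (3, 40))
    A = (U0 @ V0 > 0).astype(float)          # a binary matrix with low-rank structure
    U, V = bmf_alternating(A, k=3)
    print("Frobenius error:", np.linalg.norm(A - U @ V))
```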
arXiv Detail & Related papers (2023-06-02T18:55:27Z) - Online Learning of Smooth Functions [0.35534933448684125]
We study the online learning of real-valued functions where the hidden function is known to have certain smoothness properties.
We find new bounds for $\text{opt}_p(\mathcal{F}_q)$ that are sharp up to a constant factor.
In the multi-variable setup, we establish inequalities relating $\text{opt}_p(\mathcal{F}_{q,d})$ to $\text{opt}_p(\mathcal{F}_{q,d})$ and show that $\text{opt}_p(\mathcal{F}_{q,d})$ ...
arXiv Detail & Related papers (2023-01-04T04:05:58Z) - Convergence Rates of Stochastic Zeroth-order Gradient Descent for Łojasiewicz Functions [6.137707924685666]
We prove convergence rates of Stochastic Zeroth-order Gradient Descent (SZGD) algorithms for Łojasiewicz functions.
Our results show that $\left\{ f(\mathbf{x}_t) - f(\mathbf{x}_\infty) \right\}_{t \in \mathbb{N}}$ can converge faster than $\left\{ \|\mathbf{x}_t - \mathbf{x}_\infty\| \right\}_{t \in \mathbb{N}}$.
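As a rough illustration of the zeroth-order setting, the sketch below uses a generic two-point gradient estimator along random Gaussian directions. It is not necessarily the SZGD variant analyzed in the paper; the step size, smoothing radius, and test objective are illustrative assumptions.

```python
import numpy as np

def szgd(f, x0, steps=500, lr=0.05, mu=1e-3, seed=0):
    """Zeroth-order descent: estimate a gradient from two function evaluations
    along a random Gaussian direction, then take a descent step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        u = rng.standard_normal(x.shape)
        # Unbiased estimate of the gradient of the Gaussian-smoothed objective.
        g = (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        x -= lr * g
    return x

if __name__ == "__main__":
    f = lambda x: np.sum(np.abs(x) ** 1.5)   # simple nonsmooth test objective
    x_final = szgd(f, np.ones(5))
    print("f at final iterate:", f(x_final))
```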
arXiv Detail & Related papers (2022-10-31T00:53:17Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C\,\mathrm{OPT} + \epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products.
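The following is a minimal sketch of a generic randomized block Krylov iteration for rank-$k$ approximation, included only to illustrate the kind of matrix-vector-product method whose cost is being counted. It is not the paper's algorithm; the iteration count $q$, rank $k$, and test matrix are illustrative assumptions.

```python
import numpy as np

def block_krylov_low_rank(A, k, q=5, seed=0):
    """Rank-k approximation of A from the randomized block Krylov subspace
    span[A S, (A A^T) A S, ..., (A A^T)^q A S] for a Gaussian sketch S."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((A.shape[1], k))
    block = A @ S
    blocks = [block]
    for _ in range(q):
        block = A @ (A.T @ block)              # next Krylov power of A A^T
        blocks.append(block)
    Q, _ = np.linalg.qr(np.hstack(blocks))     # orthonormal basis of the subspace
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]     # best rank-k fit within span(Q)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    A = rng.standard_normal((200, 50)) * (1.0 / np.arange(1, 51))  # columns with decaying scale
    A_k = block_krylov_low_rank(A, k=5)
    print("relative Frobenius error:", np.linalg.norm(A - A_k) / np.linalg.norm(A))
```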
arXiv Detail & Related papers (2022-02-10T16:10:41Z) - Linear Bandits on Uniformly Convex Sets [88.3673525964507]
Linear bandit algorithms yield $\tilde{\mathcal{O}}(n\sqrt{T})$ pseudo-regret bounds on compact convex action sets.
Two types of structural assumptions lead to better pseudo-regret bounds.
arXiv Detail & Related papers (2021-03-10T07:33:03Z) - A Canonical Transform for Strengthening the Local $L^p$-Type Universal
Approximation Property [4.18804572788063]
$L^p$-type universal approximation theorems guarantee that a given machine learning model class $\mathscr{F} \subseteq C(\mathbb{R}^d,\mathbb{R}^D)$ is dense in $L^p_{\mu}(\mathbb{R}^d,\mathbb{R}^D)$.
This paper proposes a generic solution to this approximation-theoretic problem by introducing a canonical transformation which "upgrades $\mathscr{F}$'s approximation property".
arXiv Detail & Related papers (2020-06-24T17:46:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.