Phase Transitions in Rate Distortion Theory and Deep Learning
- URL: http://arxiv.org/abs/2008.01011v1
- Date: Mon, 3 Aug 2020 16:48:49 GMT
- Title: Phase Transitions in Rate Distortion Theory and Deep Learning
- Authors: Philipp Grohs, Andreas Klotz, Felix Voigtlaender
- Abstract summary: We say that $\mathcal{S}$ can be compressed at rate $s$ if we can achieve an error of $\mathcal{O}(R^{-s})$ for encoding $\mathcal{S}$.
We show that for certain "nice" signal classes $\mathcal{S}$, a phase transition occurs: We construct a probability measure $\mathbb{P}$ on $\mathcal{S}$.
- Score: 5.145741425164946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rate distortion theory is concerned with optimally encoding a given signal
class $\mathcal{S}$ using a budget of $R$ bits, as $R\to\infty$. We say that
$\mathcal{S}$ can be compressed at rate $s$ if we can achieve an error of
$\mathcal{O}(R^{-s})$ for encoding $\mathcal{S}$; the supremal compression rate
is denoted $s^\ast(\mathcal{S})$. Given a fixed coding scheme, there usually
are elements of $\mathcal{S}$ that are compressed at a higher rate than
$s^\ast(\mathcal{S})$ by the given coding scheme; we study the size of this set
of signals. We show that for certain "nice" signal classes $\mathcal{S}$, a
phase transition occurs: We construct a probability measure $\mathbb{P}$ on
$\mathcal{S}$ such that for every coding scheme $\mathcal{C}$ and any $s
>s^\ast(\mathcal{S})$, the set of signals encoded with error
$\mathcal{O}(R^{-s})$ by $\mathcal{C}$ forms a $\mathbb{P}$-null-set. In
particular our results apply to balls in Besov and Sobolev spaces that embed
compactly into $L^2(\Omega)$ for a bounded Lipschitz domain $\Omega$. As an
application, we show that several existing sharpness results concerning
function approximation using deep neural networks are generically sharp.
We also provide quantitative and non-asymptotic bounds on the probability
that a random $f\in\mathcal{S}$ can be encoded to within accuracy $\varepsilon$
using $R$ bits. This result is applied to the problem of approximately
representing $f\in\mathcal{S}$ to within accuracy $\varepsilon$ by a
(quantized) neural network that is constrained to have at most $W$ nonzero
weights and is generated by an arbitrary "learning" procedure. We show that for
any $s >s^\ast(\mathcal{S})$ there are constants $c,C$ such that, no matter how
we choose the "learning" procedure, the probability of success is bounded from
above by $\min\big\{1,2^{C\cdot W\lceil\log_2(1+W)\rceil^2
-c\cdot\varepsilon^{-1/s}}\big\}$.
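As a rough illustration of how this upper bound behaves, the sketch below evaluates $\min\big\{1,2^{C\cdot W\lceil\log_2(1+W)\rceil^2-c\cdot\varepsilon^{-1/s}}\big\}$ for sample values of $W$, $\varepsilon$, and $s$. The abstract only asserts the existence of constants $c,C$ for each $s>s^\ast(\mathcal{S})$, so the constants and the function name used here are placeholders for illustration, not values from the paper.
```python
import math

def success_prob_upper_bound(W, eps, s, c=1.0, C=1.0):
    """Evaluate min{1, 2^(C*W*ceil(log2(1+W))^2 - c*eps^(-1/s))},
    the stated upper bound on the probability that a random f in S
    is approximated to accuracy eps by a quantized network with at
    most W nonzero weights. c and C are illustrative placeholders."""
    exponent = C * W * math.ceil(math.log2(1 + W)) ** 2 - c * eps ** (-1.0 / s)
    if exponent >= 0:
        return 1.0  # the bound is vacuous in this regime
    return 2.0 ** exponent  # underflows to 0.0 once eps is small enough

# For a fixed weight budget W, shrinking eps eventually drives the bound to
# (numerically) zero, once eps^(-1/s) dominates C*W*ceil(log2(1+W))^2.
for eps in (1e-1, 1e-2, 1e-3):
    print(f"W=1000, eps={eps}: bound = {success_prob_upper_bound(1000, eps, s=0.5)}")
```
Roughly speaking, for any fixed budget $W$ the bound collapses once $\varepsilon^{-1/s}$ outgrows $W\lceil\log_2(1+W)\rceil^2$, which is the quantitative content of the phase transition described above.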
Related papers
- The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z) - Statistical Query Lower Bounds for Learning Truncated Gaussians [43.452452030671694]
We show that the complexity of any SQ algorithm for this problem is $d^{\mathrm{poly}(1/\epsilon)}$, even when the class $\mathcal{C}$ is simple so that $\mathrm{poly}(d/\epsilon)$ samples information-theoretically suffice.
arXiv Detail & Related papers (2024-03-04T18:30:33Z) - Learned Nonlinear Predictor for Critically Sampled 3D Point Cloud
Attribute Compression [24.001318485207207]
We study 3D point cloud compression via a decoder approach.
In this paper, we study predicting $f_{l+1}^*$ at level $l+1$ given $f_l^*$ at level $l$ and the encoding of $G_l^*$ for the $p=1$ case.
arXiv Detail & Related papers (2023-11-22T17:26:54Z) - Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix
Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w})=C\,\epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Local approximation of operators [0.0]
We study the problem of determining the degree of approximation of a non-linear operator between metric spaces $\mathfrak{X}$ and $\mathfrak{Y}$.
We establish constructive methods to do this efficiently, i.e., with the constants involved in the estimates on the approximation on $\mathbb{S}^d$ being $\mathcal{O}(d^{1/6})$.
arXiv Detail & Related papers (2022-02-13T19:28:34Z) - Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
arXiv Detail & Related papers (2021-08-19T16:16:48Z) - Self-training Converts Weak Learners to Strong Learners in Mixture
Models [86.7137362125503]
We show that a pseudolabeler $\boldsymbol{\beta}_{\mathrm{pl}}$ can achieve classification error at most $C_{\mathrm{err}}$.
We additionally show that by running gradient descent on the logistic loss one can obtain a pseudolabeler $\boldsymbol{\beta}_{\mathrm{pl}}$ with classification error $C_{\mathrm{err}}$ using only $O(d)$ labeled examples.
arXiv Detail & Related papers (2021-06-25T17:59:16Z) - Robust Gaussian Covariance Estimation in Nearly-Matrix Multiplication
Time [14.990725929840892]
We show an algorithm that runs in time $\widetilde{O}(T(N, d) \log\kappa / \mathrm{poly}(\varepsilon))$, where $T(N, d)$ is the time it takes to multiply a $d \times N$ matrix by its transpose.
Our runtime matches that of the fastest algorithm for covariance estimation without outliers, up to poly-logarithmic factors.
arXiv Detail & Related papers (2020-06-23T20:21:27Z) - Agnostic Learning of a Single Neuron with Gradient Descent [92.7662890047311]
We consider the problem of learning the best-fitting single neuron as measured by the expected square loss.
For the ReLU activation, our population risk guarantee is $O(\mathsf{OPT}^{1/2})+\epsilon$.
arXiv Detail & Related papers (2020-05-29T07:20:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of its content) and is not responsible for any consequences arising from its use.