Continuum Limit of Lipschitz Learning on Graphs
- URL: http://arxiv.org/abs/2012.03772v2
- Date: Wed, 3 Feb 2021 17:07:44 GMT
- Title: Continuum Limit of Lipschitz Learning on Graphs
- Authors: Tim Roith, Leon Bungert
- Abstract summary: We prove continuum limits of Lipschitz learning using $\Gamma$-convergence.
We show compactness of functionals which implies convergence of minimizers.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tackling semi-supervised learning problems with graph-based methods has
become a trend in recent years since graphs can represent all kinds of data and
provide a suitable framework for studying continuum limits, e.g., of
differential operators. A popular strategy here is $p$-Laplacian learning,
which poses a smoothness condition on the sought inference function on the set
of unlabeled data. For $p<\infty$ continuum limits of this approach were
studied using tools from $\Gamma$-convergence. For the case $p=\infty$, which
is referred to as Lipschitz learning, continuum limits of the related
infinity-Laplacian equation were studied using the concept of viscosity
solutions.
In this work, we prove continuum limits of Lipschitz learning using
$\Gamma$-convergence. In particular, we define a sequence of functionals which
approximate the largest local Lipschitz constant of a graph function and prove
$\Gamma$-convergence in the $L^\infty$-topology to the supremum norm of the
gradient as the graph becomes denser. Furthermore, we show compactness of the
functionals which implies convergence of minimizers. In our analysis we allow a
varying set of labeled data which converges to a general closed set in the
Hausdorff distance. We apply our results to nonlinear ground states and, as a
by-product, prove convergence of graph distance functions to geodesic distance
functions.
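For orientation, the objects in the abstract can be written schematically; the precise edge weights, scalings, and constants below are assumptions for illustration and differ in detail from the paper. Given data points $x_1,\dots,x_n$ with edge weights $w_{ij}$, $p$-Laplacian learning minimizes a graph $p$-Dirichlet energy of the form
$$E_{n,p}(u) = \sum_{i,j} w_{ij}\,|u(x_i)-u(x_j)|^p \qquad (p<\infty),$$
subject to the label constraints, while Lipschitz learning ($p=\infty$) minimizes the largest local Lipschitz constant,
$$E_{n,\infty}(u) = \max_{i,j} w_{ij}\,|u(x_i)-u(x_j)|.$$
The $\Gamma$-convergence result is then of the form $E_{n,\infty} \xrightarrow{\Gamma} E_\infty$ in the $L^\infty$-topology as the graph becomes denser, with a limit functional of the type $E_\infty(u) = \|\nabla u\|_{L^\infty}$, so that compactness yields convergence of minimizers of the discrete problems to minimizers of the continuum problem.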
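The distance-function by-product admits a similarly schematic description (again an illustrative form under assumed notation, not the paper's exact statement): with $\mathcal{O}_n$ the labeled vertices, $d_{G_n}$ the shortest-path distance in the graph, and $\mathcal{O}$ a closed set to which the $\mathcal{O}_n$ converge in Hausdorff distance, the graph distance functions $u_n(x) = \min_{y\in\mathcal{O}_n} d_{G_n}(x,y)$ converge to the geodesic distance function $u(x) = \inf\{\operatorname{len}(\gamma) : \gamma \text{ a curve in the domain from } x \text{ to } \mathcal{O}\}$ as the graph becomes denser.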
Related papers
- Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*(\langle \boldsymbol{x}, \boldsymbol{\theta} \rangle)$ under isotropic Gaussian data.
We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ of arbitrary link function with a sample and runtime complexity of $n \asymp T \asymp C(q) \cdot d$ …
arXiv Detail & Related papers (2024-06-03T17:56:58Z)
- Continuum limit of $p$-biharmonic equations on graphs [3.79830302036482]
The behavior of the solution is investigated on random geometric graphs as the number of data points goes to infinity.
We show that the continuum limit is an appropriately weighted $p$-biharmonic equation with homogeneous Neumann boundary conditions.
arXiv Detail & Related papers (2024-04-30T16:29:44Z)
- Tight and fast generalization error bound of graph embedding in metric space [54.279425319381374]
We show that graph embedding in non-Euclidean metric space can outperform that in Euclidean space with much smaller training data than the existing bound has suggested.
Our new upper bound is significantly tighter and faster than the existing one, which can be exponential in $R$ and $O(\frac{1}{S})$ at the fastest.
arXiv Detail & Related papers (2023-05-13T17:29:18Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- $\alpha$-GAN: Convergence and Estimation Guarantees [7.493779672689531]
We prove a correspondence between the min-max optimization of general CPE loss function GANs and the minimization of associated $f$-divergences.
We then focus on $\alpha$-GAN, defined via the $\alpha$-loss, which interpolates several GANs and corresponds to the minimization of the Arimoto divergence.
arXiv Detail & Related papers (2022-05-12T23:26:51Z)
- Hamilton-Jacobi equations on graphs with applications to semi-supervised learning and data depth [1.52292571922932]
We study a family of Hamilton-Jacobi equations on graphs that we call the $p$-eikonal equation.
We show that the $p$-eikonal equation with $p=1$ is a provably robust distance-type function on a graph.
We consider applications of the $p$-eikonal equation to data depth and semi-supervised learning, and use the continuum limit to prove consistency results for both applications.
arXiv Detail & Related papers (2022-02-17T17:50:55Z)
- Uniform Convergence Rates for Lipschitz Learning on Graphs [1.9014535120129339]
Lipschitz learning is a graph-based semi-supervised learning method.
We prove uniform convergence rates for solutions of the graph infinity Laplace equation.
arXiv Detail & Related papers (2021-11-24T09:44:14Z)
- Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn sparse graphs from data.
We show that convexity is guaranteed by a decomposition based on the upper triangular part of the corresponding matrix.
arXiv Detail & Related papers (2021-09-17T17:46:12Z)
- Proximal Gradient Descent-Ascent: Variable Convergence under KŁ Geometry [49.65455534654459]
Gradient descent-ascent (GDA) has been widely applied to solve minimax optimization problems.
This paper fills a gap by studying the variable convergence of proximal GDA under the Kurdyka-Łojasiewicz (KŁ) geometry.
arXiv Detail & Related papers (2021-02-09T05:35:53Z)
- Lipschitz regularity of graph Laplacians on random data clouds [1.2891210250935146]
We prove high probability interior and global Lipschitz estimates for solutions of graph Poisson equations.
Our results can be used to show that graph Laplacian eigenvectors are, with high probability, essentially Lipschitz regular with constants depending explicitly on their corresponding eigenvalues.
arXiv Detail & Related papers (2020-07-13T20:43:19Z)
- Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions [84.49087114959872]
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions.
In particular, we study Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions.
arXiv Detail & Related papers (2020-02-10T23:23:04Z)