Hamilton-Jacobi equations on graphs with applications to semi-supervised
learning and data depth
- URL: http://arxiv.org/abs/2202.08789v1
- Date: Thu, 17 Feb 2022 17:50:55 GMT
- Title: Hamilton-Jacobi equations on graphs with applications to semi-supervised
learning and data depth
- Authors: Jeff Calder, Mahmood Ettehad
- Abstract summary: We study a family of Hamilton-Jacobi equations on graphs that we call the $p$-eikonal equation.
We show that the $p$-eikonal equation with $p=1$ is a provably robust distance-type function on a graph.
We consider applications of the $p$-eikonal equation to data depth and semi-supervised learning, and use the continuum limit to prove consistency results for both applications.
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shortest path graph distances are widely used in data science and machine
learning, since they can approximate the underlying geodesic distance on the
data manifold. However, the shortest path distance is highly sensitive to the
addition of corrupted edges in the graph, either through noise or an
adversarial perturbation. In this paper we study a family of Hamilton-Jacobi
equations on graphs that we call the $p$-eikonal equation. We show that the
$p$-eikonal equation with $p=1$ is a provably robust distance-type function on
a graph, and the $p\to \infty$ limit recovers shortest path distances. While
the $p$-eikonal equation does not correspond to a shortest-path graph distance,
we nonetheless show that the continuum limit of the $p$-eikonal equation on a
random geometric graph recovers a geodesic density weighted distance in the
continuum. We consider applications of the $p$-eikonal equation to data depth
and semi-supervised learning, and use the continuum limit to prove asymptotic
consistency results for both applications. Finally, we show the results of
experiments with data depth and semi-supervised learning on real image
datasets, including MNIST, FashionMNIST and CIFAR-10, which show that the
$p$-eikonal equation offers significantly better results compared to shortest
path distances.
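The abstract does not write out the operator, so the sketch below assumes one standard graph form of a $p$-eikonal-type equation, $\sum_{y} w_{xy}(u(x)-u(y))_+^p = f(x)$ for $x$ outside a source set $\Gamma$, with $u=0$ on $\Gamma$; the paper's exact definition may differ. For $p=1$ the nodewise equation is piecewise linear in $u(x)$ and can be solved exactly, so a plain Gauss-Seidel sweep suffices for a toy implementation (all names are illustrative, not the authors' code):

```python
import numpy as np

def local_solve_p1(nbr_vals, nbr_wts, fx):
    """Solve sum_j w_j * max(t - u_j, 0) = fx for t (the p = 1 case).

    The left-hand side is piecewise linear and nondecreasing in t, so we
    scan breakpoints (the sorted neighbor values) until fx is bracketed,
    then interpolate.
    """
    order = np.argsort(nbr_vals)
    u, w = nbr_vals[order], nbr_wts[order]
    slope, val = 0.0, 0.0  # slope and value of the LHS at t = u[k]
    for k in range(len(u)):
        slope += w[k]
        nxt = u[k + 1] if k + 1 < len(u) else np.inf
        if val + slope * (nxt - u[k]) >= fx:
            return u[k] + (fx - val) / slope
        val += slope * (nxt - u[k])
    return np.inf  # unreachable when all weights are positive

def p_eikonal_p1(W, source, f, n_iter=200):
    """Gauss-Seidel iteration for the assumed p = 1 eikonal equation.

    W      : (n, n) symmetric nonnegative weight matrix (dense for clarity)
    source : boolean mask of source nodes, where u = 0 is enforced
    f      : (n,) positive right-hand side (density weighting)
    """
    n = W.shape[0]
    u = np.where(source, 0.0, 1e12)  # large finite init avoids inf arithmetic
    for _ in range(n_iter):
        for x in range(n):
            if not source[x]:
                nbrs = np.nonzero(W[x])[0]
                u[x] = local_solve_p1(u[nbrs], W[x, nbrs], f[x])
    return u
```

As $p\to\infty$ the update concentrates on a single best neighbor and, per the abstract, the solution approaches the shortest-path distance from $\Gamma$; at $p=1$ every neighbor contributes, which is plausibly the averaging effect behind the robustness to corrupted edges described above.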
Related papers
- Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*\left(\langle \boldsymbol{x}, \boldsymbol{\theta} \rangle\right)$ under isotropic Gaussian data.
We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ for an arbitrary link function at a sample and runtime complexity of $n \asymp T \asymp C(q) \cdot d$
arXiv Detail & Related papers (2024-06-03T17:56:58Z)
- Continuum limit of $p$-biharmonic equations on graphs [3.79830302036482]
The behavior of the solution is investigated on random geometric graphs as the number of data points goes to infinity.
We show that the continuum limit is an appropriately weighted $p$-biharmonic equation with homogeneous Neumann boundary conditions.
arXiv Detail & Related papers (2024-04-30T16:29:44Z)
- All You Need is Resistance: On the Equivalence of Effective Resistance and Certain Optimal Transport Problems on Graphs [48.84819106277247]
We claim that effective resistance and optimal transport on graphs should be understood as one and the same, up to a choice of $p$.
We show explicit connections to optimal stopping times and random walks on graphs, graph Sobolev spaces, and a Benamou-Brenier type formula for $2$-Beckmann distance.
We propose further study of the use of these metrics in settings where the Wasserstein distance creates computational bottlenecks.
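The classical fact underlying this correspondence: the effective resistance between nodes $i$ and $j$ equals $(e_i - e_j)^\top L^+ (e_i - e_j)$, where $L^+$ is the Moore-Penrose pseudoinverse of the graph Laplacian $L = D - W$. A minimal numpy sketch of that computation (the Beckmann/optimal-transport side of the equivalence is the paper's contribution and is not reproduced here):

```python
import numpy as np

def effective_resistance(W):
    """All-pairs effective resistance from a symmetric weight matrix W.

    Uses R(i, j) = (e_i - e_j)^T L^+ (e_i - e_j), with L^+ the
    pseudoinverse of the combinatorial Laplacian L = D - W.
    """
    L = np.diag(W.sum(axis=1)) - W
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp  # R[i, j]

# Example: path graph 0 - 1 - 2 with unit weights; R(0, 2) = 2 (series resistors).
W = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(effective_resistance(W)[0, 2])  # ~2.0
```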
arXiv Detail & Related papers (2024-04-23T17:50:52Z)
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifolds, called soft manifolds, that addresses this issue.
Using soft manifolds for graph embedding provides continuous spaces suited to downstream data-analysis tasks on complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- Graph Mover's Distance: An Efficiently Computable Distance Measure for Geometric Graphs [0.0]
The geometric graph distance (GGD) has recently been studied as a meaningful measure of similarity between two geometric graphs.
In this paper we propose the Graph Mover's Distance (GMD), formulated as an instance of the earth mover's distance.
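The blurb does not give the GMD formulation itself, so the sketch below shows only the generic earth mover's distance linear program that such a measure instantiates, with an arbitrary cost matrix standing in for whatever node-matching cost the GMD uses (an illustrative assumption, not the paper's construction):

```python
import numpy as np
from scipy.optimize import linprog

def emd(a, b, C):
    """Earth mover's distance between histograms a and b with cost matrix C.

    Solves min <C, X> s.t. X 1 = a, X^T 1 = b, X >= 0 as a linear program;
    a and b must have equal total mass.
    """
    m, n = C.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0  # row sums equal a
    for j in range(n):
        A_eq[m + j, j::n] = 1.0           # column sums equal b
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun

# Example: move one unit of mass between two bins at unit cost.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
print(emd(a, b, C))  # 1.0
```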
arXiv Detail & Related papers (2023-06-03T15:06:12Z)
- Tight and fast generalization error bound of graph embedding in metric space [54.279425319381374]
We show that graph embedding in a non-Euclidean metric space can outperform embedding in Euclidean space with far less training data than the existing bound suggests.
Our new upper bound is significantly tighter and faster than the existing one, which can be exponential in $R$ and at best $O(\frac{1}{S})$.
arXiv Detail & Related papers (2023-05-13T17:29:18Z)
- Efficient Signed Graph Sampling via Balancing & Gershgorin Disc Perfect Alignment [51.74913666829224]
We show that for datasets with strong inherent anti-correlations, a suitable graph contains both positive and negative edge weights.
We propose a linear-time signed graph sampling method centered on the concept of balanced signed graphs.
Experimental results show that our signed graph sampling method noticeably outperforms existing fast sampling schemes on various datasets.
arXiv Detail & Related papers (2022-08-18T09:19:01Z)
- Condensing Graphs via One-Step Gradient Matching [50.07587238142548]
We propose a one-step gradient matching scheme, which performs gradient matching for only a single step, without training the network weights.
Our theoretical analysis shows this strategy can generate synthetic graphs that lead to lower classification loss on real graphs.
In particular, we are able to reduce the dataset size by 90% while approximating up to 98% of the original performance.
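A sketch of the idea with generic batches (the paper applies this to graphs and GNNs; the cosine-distance objective below is a common choice in gradient-matching condensation work, assumed here rather than taken from the paper):

```python
import torch

def one_step_gradient_matching_loss(model, loss_fn, real_batch, syn_batch):
    """Match the loss gradients produced by real and synthetic data for a
    single set of network weights; no inner training loop over the weights.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    g_real = torch.autograd.grad(
        loss_fn(model(real_batch[0]), real_batch[1]), params)
    g_syn = torch.autograd.grad(
        loss_fn(model(syn_batch[0]), syn_batch[1]), params,
        create_graph=True)  # the synthetic data is what gets optimized
    # One common choice: sum of per-layer cosine distances between gradients.
    return sum(1 - torch.nn.functional.cosine_similarity(
        gr.flatten(), gs.flatten(), dim=0) for gr, gs in zip(g_real, g_syn))
```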
arXiv Detail & Related papers (2022-06-15T18:20:01Z)
- Multiscale Graph Comparison via the Embedded Laplacian Distance [6.09170287691728]
We propose the Embedded Laplacian Distance (ELD) for comparing graphs of vastly different sizes.
Our approach first projects the graphs onto a common, low-dimensional Laplacian embedding space that respects graphical structure.
A distance can then be computed efficiently via a natural sliced Wasserstein approach.
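A rough shape of that pipeline in numpy/scipy, assuming a normalized-Laplacian eigenmap and random one-dimensional projections for the sliced Wasserstein step (the ELD's actual alignment and weighting details are in the paper, not here):

```python
import numpy as np
from scipy.stats import wasserstein_distance

def laplacian_embedding(W, k):
    """Embed nodes via the k nontrivial eigenvectors of the normalized
    Laplacian L = I - D^{-1/2} W D^{-1/2} (assumes no isolated nodes)."""
    d = W.sum(axis=1)
    L = np.eye(len(d)) - W / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:k + 1]  # skip the trivial constant eigenvector

def sliced_w_distance(X, Y, n_proj=64, seed=0):
    """Average 1-D Wasserstein distance over random projections; X and Y
    may contain different numbers of points (graphs of different sizes)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        v = rng.normal(size=X.shape[1])
        v /= np.linalg.norm(v)
        total += wasserstein_distance(X @ v, Y @ v)
    return total / n_proj
```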
arXiv Detail & Related papers (2022-01-28T12:13:08Z)
- Continuum Limit of Lipschitz Learning on Graphs [0.0]
We prove continuum limits of Lipschitz learning using $\Gamma$-convergence.
We show compactness of the associated functionals, which implies convergence of minimizers.
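For context, Lipschitz learning assigns unlabeled values via an absolutely minimal Lipschitz extension; on an unweighted graph the minimizer satisfies the discrete infinity-Laplace equation, which the classic midpoint iteration solves (a sketch under those assumptions; the paper itself concerns the continuum limit, not a solver):

```python
import numpy as np

def lipschitz_learning(adj, labels, n_iter=1000):
    """Midpoint iteration u(x) <- (max_{y~x} u(y) + min_{y~x} u(y)) / 2,
    whose fixed point solves the discrete infinity-Laplace equation with
    the labeled nodes held fixed.

    adj    : list of numpy index arrays, adj[x] = neighbors of node x
    labels : dict {node_index: value}, kept fixed during the iteration
    """
    u = np.zeros(len(adj))
    for i, v in labels.items():
        u[i] = v
    for _ in range(n_iter):
        for x in range(len(adj)):
            if x not in labels:
                nbr = u[adj[x]]
                u[x] = 0.5 * (nbr.max() + nbr.min())
    return u
```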
arXiv Detail & Related papers (2020-12-07T15:10:35Z)