Hypergraph $p$-Laplacian regularization on point clouds for data interpolation
- URL: http://arxiv.org/abs/2405.01109v1
- Date: Thu, 2 May 2024 09:17:32 GMT
- Title: Hypergraph $p$-Laplacian regularization on point clouds for data interpolation
- Authors: Kehan Shi, Martin Burger
- Abstract summary: Hypergraphs are widely used to model higher-order relations in data.
We define the $\varepsilon_n$-ball hypergraph and the $k_n$-nearest neighbor hypergraph on a point cloud.
We prove the variational consistency between the hypergraph $p$-Laplacian regularization and the $p$-Laplacian regularization in a semi-supervised setting.
- Score: 3.79830302036482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a generalization of graphs, hypergraphs are widely used to model higher-order relations in data. This paper explores the benefit of the hypergraph structure for the interpolation of point cloud data that contain no explicit structural information. We define the $\varepsilon_n$-ball hypergraph and the $k_n$-nearest neighbor hypergraph on a point cloud and study the $p$-Laplacian regularization on the hypergraphs. We prove the variational consistency between the hypergraph $p$-Laplacian regularization and the continuum $p$-Laplacian regularization in a semisupervised setting when the number of points $n$ goes to infinity while the number of labeled points remains fixed. A key improvement compared to the graph case is that the results rely on weaker assumptions on the upper bound of $\varepsilon_n$ and $k_n$. To solve the convex but non-differentiable large-scale optimization problem, we utilize the stochastic primal-dual hybrid gradient algorithm. Numerical experiments on data interpolation verify that the hypergraph $p$-Laplacian regularization outperforms the graph $p$-Laplacian regularization in preventing the development of spikes at the labeled points.
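To make the regularization objective concrete, here is a minimal Python sketch, not the paper's implementation: it builds a $k_n$-nearest-neighbor hypergraph in which each point's neighborhood forms one hyperedge, and evaluates one common form of hypergraph $p$-Dirichlet energy, the sum over hyperedges of the $p$-th power of the spread of $u$ on the hyperedge. The exact functional, weights, and solver (the stochastic primal-dual hybrid gradient algorithm) in the paper may differ.

```python
import numpy as np

def knn_hyperedges(X, k):
    """Each point's k nearest neighbors (including itself) form one hyperedge."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return [np.argsort(d[i])[:k + 1] for i in range(len(X))]

def hypergraph_p_energy(u, hyperedges, p=2.0):
    """One common hypergraph p-Dirichlet energy: sum over hyperedges e of
    (max_{i in e} u_i - min_{i in e} u_i)^p. Constant u gives zero energy."""
    return sum((u[e].max() - u[e].min()) ** p for e in hyperedges)
```

In the semi-supervised setting of the paper, this energy would be minimized over functions $u$ that agree with the given labels at the fixed labeled points.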
Related papers
- Detection of Dense Subhypergraphs by Low-Degree Polynomials [72.4451045270967]
Detection of a planted dense subgraph in a random graph is a fundamental statistical and computational problem.
We consider detecting the presence of a planted $G^r(n^\gamma, n^{-\alpha})$ subhypergraph in a $G^r(n, n^{-\beta})$ hypergraph.
Our results are already new in the graph case $r=2$, as we consider the subtle log-density regime where hardness based on average-case reductions is not known.
arXiv Detail & Related papers (2023-04-17T10:38:08Z) - Rates of Convergence for Regression with the Graph Poly-Laplacian [3.222802562733786]
Higher order regularity can be obtained via replacing the Laplacian regulariser with a poly-Laplacian regulariser.
We consider graph poly-Laplacian regularisation in a fully supervised, non-parametric, noise corrupted, regression problem.
arXiv Detail & Related papers (2022-09-06T08:59:15Z) - Efficient Signed Graph Sampling via Balancing & Gershgorin Disc Perfect
Alignment [51.74913666829224]
We show that for datasets with strong inherent anti-correlations, a suitable graph contains both positive and negative edge weights.
We propose a linear-time signed graph sampling method centered on the concept of balanced signed graphs.
Experimental results show that our signed graph sampling method outperformed existing fast sampling schemes noticeably on various datasets.
arXiv Detail & Related papers (2022-08-18T09:19:01Z) - Bi-stochastically normalized graph Laplacian: convergence to manifold Laplacian and robustness to outlier noise [10.418647759223965]
Bi-stochastic normalization provides an alternative normalization of graph Laplacians in graph-based data analysis.
We prove the convergence of bi-stochastically normalized graph Laplacian to manifold (weighted-)Laplacian with rates.
When the manifold data are corrupted by outlier noise, we theoretically prove the graph Laplacian point-wise consistency.
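As background for the bi-stochastic normalization discussed above, the classical way to make a positive affinity matrix approximately doubly stochastic is Sinkhorn-Knopp iteration; the following is a minimal sketch of that generic procedure, not the paper's exact scheme.

```python
import numpy as np

def sinkhorn_bistochastic(W, n_iter=200):
    """Alternately rescale rows and columns of a positive matrix W
    (Sinkhorn-Knopp) until it is approximately doubly stochastic."""
    W = W.astype(float).copy()
    for _ in range(n_iter):
        W /= W.sum(axis=1, keepdims=True)  # make row sums 1
        W /= W.sum(axis=0, keepdims=True)  # make column sums 1
    return W
```

For strictly positive affinities the alternating rescaling converges, and the resulting matrix can replace the usual row-stochastic (random-walk) normalization of the graph Laplacian.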
arXiv Detail & Related papers (2022-06-22T21:08:24Z) - Core-periphery Models for Hypergraphs [0.0]
We introduce a random hypergraph model for core-periphery structure.
We develop a novel statistical inference algorithm that scales to large hypergraphs, with runtime that is practically linear with respect to the input size.
Our inference algorithm is capable of learning embeddings that correspond to the reputation (rank) of a node within the hypergraph.
arXiv Detail & Related papers (2022-06-01T22:11:44Z) - AnchorGAE: General Data Clustering via $O(n)$ Bipartite Graph
Convolution [79.44066256794187]
We show how to convert a non-graph dataset into a graph by introducing the generative graph model, which is used to build graph convolution networks (GCNs).
A bipartite graph constructed by anchors is updated dynamically to exploit the high-level information behind data.
We theoretically prove that the simple update will lead to degeneration and a specific strategy is accordingly designed.
arXiv Detail & Related papers (2021-11-12T07:08:13Z) - Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov
Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn sparse graphs from data.
We show that a triangular convexity decomposition is guaranteed by a transform applied to the upper triangular part of the corresponding matrix.
arXiv Detail & Related papers (2021-09-17T17:46:12Z) - Recurrently Predicting Hypergraphs [30.092688729343678]
A problem arises from the number of possible multi-way relationships, or hyperedges, scaling as $\mathcal{O}(2^n)$ for a set of $n$ elements.
We propose a recurrent hypergraph neural network that predicts the incidence matrix by iteratively refining an initial guess of the solution.
arXiv Detail & Related papers (2021-06-26T01:12:41Z) - Projection-free Graph-based Classifier Learning using Gershgorin Disc
Perfect Alignment [59.87663954467815]
In graph-based binary learning, a subset of known labels $\hat{x}_i$ are used to infer unknown labels.
When restricting labels $x_i$ to binary values, the problem is NP-hard.
We propose a fast projection-free method by solving a sequence of linear programs (LP) instead.
arXiv Detail & Related papers (2021-06-03T07:22:48Z) - Accelerated Gradient Tracking over Time-varying Graphs for Decentralized Optimization [59.65871549878937]
We prove that the practical single loop accelerated gradient tracking needs $O\big((\frac{\gamma}{1-\sigma_\gamma})^{2}\sqrt{\frac{L}{\epsilon}}\big)$ iterations.
Our convergence rates improve significantly over the ones of $O(\frac{1}{\epsilon^{5/7}})$ and $O\big((\frac{L}{\mu})^{5/7}\frac{1}{(1-\sigma)^{1.5}}\log\frac{1}{\epsilon}\big)$.
arXiv Detail & Related papers (2021-04-06T15:34:14Z) - Semi-supervised Hypergraph Node Classification on Hypergraph Line
Expansion [7.933465724913661]
We propose a new hypergraph formulation named the line expansion (LE) for hypergraph learning.
The proposed line expansion makes existing graph learning algorithms compatible with the higher-order structure.
We evaluate the proposed line expansion on five hypergraph datasets; the results show that our method beats SOTA baselines by a significant margin.
arXiv Detail & Related papers (2020-05-11T03:02:21Z)
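The line-expansion construction in the last entry above can be sketched in plain Python, based only on the abstract's description (the paper's weighting scheme is not reproduced): each (vertex, hyperedge) incidence pair becomes a node of the LE graph, and two such nodes are adjacent when they share either the vertex or the hyperedge.

```python
def line_expansion(hyperedges):
    """Build the line-expansion (LE) graph of a hypergraph.
    Nodes are (vertex, hyperedge-index) incidence pairs; two nodes are
    adjacent iff they share the vertex or the hyperedge."""
    nodes = [(v, ei) for ei, e in enumerate(hyperedges) for v in e]
    edges = set()
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            (v1, e1), (v2, e2) = nodes[a], nodes[b]
            if v1 == v2 or e1 == e2:
                edges.add((a, b))
    return nodes, sorted(edges)
```

Because the result is an ordinary graph, standard graph learning algorithms can then be run on it unchanged, which is the compatibility the abstract refers to.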
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.