Empirical Error Estimates for Graph Sparsification
- URL: http://arxiv.org/abs/2503.08031v1
- Date: Tue, 11 Mar 2025 04:28:57 GMT
- Title: Empirical Error Estimates for Graph Sparsification
- Authors: Siyao Wang, Miles E. Lopes
- Abstract summary: Graph sparsification is a technique for accelerating graph-based learning algorithms. Because the sparsification error is random and unknown, users must contend with uncertainty about the reliability of downstream computations. We propose to address these issues by computing empirical error estimates.
- Score: 3.6095388702618414
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph sparsification is a well-established technique for accelerating graph-based learning algorithms, which uses edge sampling to approximate dense graphs with sparse ones. Because the sparsification error is random and unknown, users must contend with uncertainty about the reliability of downstream computations. Although it is possible for users to obtain conceptual guidance from theoretical error bounds in the literature, such results are typically impractical at a numerical level. Taking an alternative approach, we propose to address these issues from a data-driven perspective by computing empirical error estimates. The proposed error estimates are highly versatile, and we demonstrate this in four use cases: Laplacian matrix approximation, graph cut queries, graph-structured regression, and spectral clustering. Moreover, we provide two theoretical guarantees for the error estimates, and explain why the cost of computing them is manageable in comparison to the overall cost of a typical graph sparsification workflow.
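To make the workflow concrete, below is a minimal sketch (not the authors' implementation) of sparsifying a dense weighted graph by i.i.d. uniform edge sampling with Horvitz-Thompson reweighting, followed by a bootstrap-style empirical estimate of the spectral-norm Laplacian error obtained by re-sparsifying the sparsifier itself. The sampling scheme, the resampling heuristic, and all function names are illustrative assumptions, not the paper's exact estimator.

```python
# Minimal sketch: graph sparsification by uniform edge sampling, plus a
# bootstrap-style empirical estimate of the Laplacian approximation error.
# (Illustrative assumptions only -- not the paper's exact procedure.)
import numpy as np

def laplacian(n, edges, weights):
    """Weighted graph Laplacian built from an edge list."""
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return L

def sparsify(n, edges, weights, s, rng):
    """Draw s edges i.i.d. uniformly and reweight (Horvitz-Thompson style,
    w_k / (s * p_k) with p_k = 1/m) so the sparse Laplacian is unbiased."""
    m = len(edges)
    counts = np.bincount(rng.integers(0, m, size=s), minlength=m)
    new_w = counts * np.asarray(weights) * m / s
    keep = np.flatnonzero(new_w)
    return [edges[k] for k in keep], new_w[keep]

rng = np.random.default_rng(0)
n = 60
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # dense graph
weights = rng.uniform(0.5, 1.5, size=len(edges))

L = laplacian(n, edges, weights)
sp_edges, sp_w = sparsify(n, edges, weights, s=500, rng=rng)
L_tilde = laplacian(n, sp_edges, sp_w)
actual_err = np.linalg.norm(L - L_tilde, 2)  # unknown in practice

# Empirical error estimate: re-sparsify the sparsifier itself and take a
# quantile of the resampled spectral-norm errors as a data-driven proxy.
boot_errs = []
for _ in range(50):
    be, bw = sparsify(n, sp_edges, sp_w, s=500, rng=rng)
    boot_errs.append(np.linalg.norm(L_tilde - laplacian(n, be, bw), 2))

print(f"actual error (oracle): {actual_err:.3f}")
print(f"empirical 90% estimate: {np.quantile(boot_errs, 0.9):.3f}")
```

The quantile of the resampled errors plays the role of the data-driven error estimate; the paper's two theoretical guarantees for such estimates are not reproduced here.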
Related papers
- Robust Graph-Based Semi-Supervised Learning via $p$-Conductances [49.0776396776252]
We study the problem of semi-supervised learning on graphs in the regime where data labels are scarce or possibly corrupted. We propose an approach called $p$-conductance learning that generalizes the $p$-Laplace and Poisson learning methods. Empirical results on computer vision and citation datasets demonstrate that our approach achieves state-of-the-art accuracy in low label-rate, corrupted-label, and partial-label regimes.
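For background, the $p$-Laplace learning objective that this approach generalizes is typically written as follows; this is a standard form assumed for context, not quoted from the paper.

```latex
% Standard p-Laplace semi-supervised learning objective (background form):
% given edge weights w_{ij} and labels g_i on a labeled node set O, solve
\[
  \min_{u} \; \sum_{i,j} w_{ij}\, \lvert u_i - u_j \rvert^{p}
  \quad \text{subject to} \quad u_i = g_i \ \text{for all } i \in O .
\]
% Taking p = 2 recovers classical Laplace learning; Poisson learning instead
% places point sources at the labeled nodes in the graph Laplace equation.
```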
arXiv Detail & Related papers (2025-02-13T01:11:25Z)
- Stochastic Variance-Reduced Iterative Hard Thresholding in Graph Sparsity Optimization [0.626226809683956]
We introduce two methods to solve gradient-based graph sparsity optimization: GraphRG-IHT and GraphSG-IHT.
We provide a general framework for theoretical analysis, demonstrating that the proposed methods enjoy gradient-based guarantees.
arXiv Detail & Related papers (2024-07-24T03:26:26Z)
- Learning Latent Graph Structures and their Uncertainty [63.95971478893842]
Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy.
As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task.
arXiv Detail & Related papers (2024-05-30T10:49:22Z)
- Network Topology Inference with Sparsity and Laplacian Constraints [18.447094648361453]
We tackle the network topology inference problem by utilizing Laplacian constrained Gaussian graphical models.
We develop an efficient projection algorithm to solve the resulting problem.
arXiv Detail & Related papers (2023-09-02T15:06:30Z)
- Efficient Graph Laplacian Estimation by Proximal Newton [12.05527862797306]
A graph learning problem can be formulated as a maximum likelihood estimation (MLE) of the precision matrix.
We develop a second-order approach to obtain an efficient solver utilizing several algorithmic features.
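For background, the MLE referenced above is usually stated in the following Laplacian-constrained form; this is a standard formulation assumed for context rather than the paper's exact objective.

```latex
% Laplacian-constrained Gaussian MLE (standard background form): given a
% sample covariance S, estimate the precision matrix by
\[
  \widehat{\Theta} \;=\; \operatorname*{arg\,max}_{\Theta \in \mathcal{L}}
  \;\; \log {\det}^{*}(\Theta) \;-\; \operatorname{tr}(S\,\Theta),
\]
% where \mathcal{L} is the set of graph Laplacians (nonpositive off-diagonal
% entries, zero row sums) and det* is the pseudo-determinant, needed because
% every Laplacian is singular.
```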
arXiv Detail & Related papers (2023-02-13T15:13:22Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn sparse graphs from data.
We show that overall convexity is guaranteed via a decomposition based on the upper triangular part of the corresponding matrix.
arXiv Detail & Related papers (2021-09-17T17:46:12Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method yields clusters consistent with the ground truth.
We also derive a clustering algorithm for solving the associated clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Does the $\ell_1$-norm Learn a Sparse Graph under Laplacian Constrained Graphical Models? [13.572602792770288]
We consider the problem of learning a graph under Laplacian constrained Gaussian graphical models.
We show that a large regularization parameter will surprisingly lead to a complete graph, i.e., every pair of nodes is connected by an edge (see the note after this entry).
arXiv Detail & Related papers (2020-06-26T12:06:10Z)
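A short worked explanation of the complete-graph phenomenon above, reconstructed from the standard Laplacian-constrained formulation (an assumption for context, not a quotation from the paper): because a Laplacian has nonpositive off-diagonal entries and zero row sums, the $\ell_1$ penalty degenerates into a trace term.

```latex
% For a Laplacian, Theta_{ij} <= 0 off the diagonal and each row sums to zero,
% so the l1 penalty on off-diagonal entries collapses to a trace:
\[
  \lambda \sum_{i \neq j} \lvert \Theta_{ij} \rvert
  \;=\; -\lambda \sum_{i \neq j} \Theta_{ij}
  \;=\; \lambda \operatorname{tr}(\Theta).
\]
% In the penalized MLE this merely replaces the sample covariance S with
% S + lambda I, so it cannot promote sparsity, and a large lambda drives the
% estimate toward a complete graph.
```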
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.