Scalable Verification of GNN-based Job Schedulers
- URL: http://arxiv.org/abs/2203.03153v1
- Date: Mon, 7 Mar 2022 06:13:04 GMT
- Title: Scalable Verification of GNN-based Job Schedulers
- Authors: Haoze Wu, Clark Barrett, Mahmood Sharif, Nina Narodytska, Gagandeep
Singh
- Abstract summary: Graph Neural Networks (GNNs) have been applied to scheduling jobs over clusters, achieving better performance than hand-crafted heuristics.
We develop GNN-Verify, the first general framework for verifying both single-step and multi-step properties of these schedulers.
- Score: 16.7289491091472
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs
over clusters, achieving better performance than hand-crafted heuristics.
Despite their impressive performance, concerns remain over their
trustworthiness when deployed in a real-world environment due to their
black-box nature. To address these limitations, we consider formal verification
of their expected properties such as strategy proofness and locality in this
work. We address several domain-specific challenges such as deeper networks and
richer specifications not encountered by existing verifiers for image and NLP
classifiers. We develop GNN-Verify, the first general framework for verifying
both single-step and multi-step properties of these schedulers based on
carefully designed algorithms that combine abstractions, refinements, solvers,
and proof transfer. Our experimental results on challenging benchmarks show
that our approach can provide precise and scalable formal guarantees on the
trustworthiness of state-of-the-art GNN-based schedulers.
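The abstract names abstraction-based verification but gives no algorithmic detail. As a rough, hypothetical illustration of what a single-step check can look like, the sketch below uses interval bound propagation (a standard abstraction technique, not necessarily the one GNN-Verify uses) to certify that a given job keeps the top score of a one-layer mean-aggregation GNN under bounded perturbation of node features. All function and variable names are my own.

```python
import numpy as np

def ibp_linear(lo, hi, W, b):
    # Split W by sign so the interval bounds stay sound through x @ W + b.
    Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
    return lo @ Wp + hi @ Wn + b, hi @ Wp + lo @ Wn + b

def certify_top_job(X, A, W, b, eps, job):
    """Soundly check that node `job` keeps the highest score for every
    perturbed input X' with |X' - X| <= eps (elementwise), after one
    mean-aggregation layer, a ReLU, and a sum readout."""
    deg = A.sum(axis=1, keepdims=True)
    # Mean aggregation has nonnegative coefficients, so it maps
    # elementwise intervals to intervals exactly.
    lo, hi = (A @ (X - eps)) / deg, (A @ (X + eps)) / deg
    lo, hi = ibp_linear(lo, hi, W, b)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    s_lo, s_hi = lo.sum(axis=1), hi.sum(axis=1)
    others = np.delete(np.arange(len(X)), job)
    # Certified iff the job's worst-case score beats every rival's best case.
    return bool(s_lo[job] > s_hi[others].max())
```

A `True` result is a proof for all perturbations within `eps`; a `False` result is inconclusive, which is exactly the imprecision that the refinement and solver stages of a framework like GNN-Verify exist to resolve.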
Related papers
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z) - Accurate and Scalable Estimation of Epistemic Uncertainty for Graph
Neural Networks [40.95782849532316]
We propose a novel training framework designed to improve intrinsic GNN uncertainty estimates.
Our framework adapts the principle of centering data to graph data through novel graph anchoring strategies.
Our work provides insights into uncertainty estimation for GNNs, and demonstrates the utility of G-$\Delta$UQ in obtaining reliable estimates.
arXiv Detail & Related papers (2024-01-07T00:58:33Z) - VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees [3.208888890455612]
We propose a novel framework to generate Verification-Friendly Neural Networks (VNNs)
We present a post-training optimization framework to achieve a balance between preserving prediction performance and verification-friendliness.
arXiv Detail & Related papers (2023-12-15T12:39:27Z) - Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z) - Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
GNNs suffer from fairness issues that arise as a result of the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z) - Graph Neural Networks are Inherently Good Generalizers: Insights by
Bridging GNNs and MLPs [71.93227401463199]
This paper pinpoints the major source of GNNs' performance gain to their intrinsic capability, by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training.
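The PMLP idea, as summarized above, is to share weights with a GNN but skip message passing during training and add it back only at inference. A minimal sketch of that construction, with names and the choice of mean-style aggregation `A_hat` being my own assumptions:

```python
import numpy as np

def mlp(X, W1, W2):
    """Plain two-layer MLP applied node-wise; the graph is ignored.
    This is the cheap model one would actually train."""
    return np.maximum(X @ W1, 0.0) @ W2

def pmlp(X, A_hat, W1, W2):
    """Same weights, but normalized message passing (A_hat @ .) is
    inserted around each layer at inference time only."""
    H = np.maximum(A_hat @ X @ W1, 0.0)
    return A_hat @ (H @ W2)
```

Note that when `A_hat` is the identity (no edges), `pmlp` reduces exactly to `mlp`, which is what makes the pair a controlled comparison between graph-free training and graph-aware inference.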
arXiv Detail & Related papers (2022-12-18T08:17:32Z) - ReFactorGNNs: Revisiting Factorisation-based Models from a
Message-Passing Perspective [42.845783579293]
We bridge the gap between Factorisation-based Models (FMs) and Graph Neural Networks (GNNs) by proposing ReFactorGNNs.
We show how FMs can be cast as GNNs by reformulating the gradient descent procedure as message-passing operations.
Our ReFactorGNNs achieve comparable transductive performance to FMs, and state-of-the-art inductive performance while using an order of magnitude fewer parameters.
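The key reformulation in the ReFactorGNN summary, casting a factorisation model's gradient descent as message passing, can be made concrete with a toy example. Below, one gradient step on a dot-product factorisation loss is written so that each node first aggregates gradient "messages" from its neighbours, then applies a node-wise update. This is my own minimal illustration of the general idea, not the paper's architecture:

```python
import numpy as np

def factorisation_step(U, edges, vals, lr=0.05):
    """One gradient step on sum over edges (i, j) of (u_i . u_j - a_ij)^2,
    phrased as message passing: accumulate messages, then update nodes."""
    msgs = np.zeros_like(U)
    for (i, j), a in zip(edges, vals):
        err = U[i] @ U[j] - a
        msgs[i] += err * U[j]  # message j -> i
        msgs[j] += err * U[i]  # message i -> j
    return U - 2.0 * lr * msgs  # node-wise update

def factorisation_loss(U, edges, vals):
    return sum((U[i] @ U[j] - a) ** 2 for (i, j), a in zip(edges, vals))
```

The point of the rewrite is purely structural: the inner loop touches only edge endpoints, which is exactly the locality that defines a message-passing GNN layer.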
arXiv Detail & Related papers (2022-07-20T15:39:30Z) - Superiority of GNN over NN in generalizing bandlimited functions [6.3151583550712065]
Graph Neural Networks (GNNs) have emerged as formidable resources for processing graph-based information across diverse applications.
In this study, we investigate the proficiency of GNNs at such tasks, which can also be cast as a function-approximation problem.
Our findings highlight a pronounced efficiency in utilizing GNNs to generalize a bandlimited function within an $\varepsilon$-error margin.
arXiv Detail & Related papers (2022-06-13T05:15:12Z) - Robust Graph Neural Networks via Probabilistic Lipschitz Constraints [7.359962178534361]
Graph neural networks (GNNs) have recently been demonstrated to perform well on a variety of network-based tasks.
GNNs are susceptible to shifts and perturbations on their inputs, which can include both node attributes and graph structure.
arXiv Detail & Related papers (2021-12-14T17:33:32Z) - A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNNs) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly designed reward function that incorporates a degree of bias to reduce variance and avoid unstable, possibly unbounded payouts.
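The summary above treats each candidate neighbour as a bandit arm with a clipped (hence bounded) reward. As a generic illustration of that framing, here is a plain UCB1-style neighbour sampler; the reward function itself, the clipping range, and all names are my own stand-ins, not the paper's design:

```python
import math

class UCBNeighborSampler:
    """Treat each neighbour of a node as a bandit arm and repeatedly
    pick the one with the best upper confidence bound on its reward."""

    def __init__(self, neighbors, c=1.0):
        self.neighbors = list(neighbors)
        self.counts = {v: 0 for v in self.neighbors}
        self.means = {v: 0.0 for v in self.neighbors}
        self.t = 0
        self.c = c  # exploration strength

    def sample(self):
        self.t += 1
        # Play each arm once before trusting the confidence bounds.
        for v in self.neighbors:
            if self.counts[v] == 0:
                return v
        return max(
            self.neighbors,
            key=lambda v: self.means[v]
            + self.c * math.sqrt(math.log(self.t) / self.counts[v]),
        )

    def update(self, v, reward):
        # Clip the reward so payouts stay bounded, mirroring the
        # variance-control concern in the summary above.
        reward = max(0.0, min(1.0, reward))
        self.counts[v] += 1
        self.means[v] += (reward - self.means[v]) / self.counts[v]
```

In a real sampler the reward would be derived from how much a neighbour's embedding reduces the aggregation's approximation error; here any bounded signal works.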
arXiv Detail & Related papers (2021-03-01T15:55:58Z) - Towards an Efficient and General Framework of Robust Training for Graph
Neural Networks [96.93500886136532]
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures lead them to make wrong predictions.
We propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs.
arXiv Detail & Related papers (2020-02-25T15:17:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.