Anomaly Detection in Networks via Score-Based Generative Models
- URL: http://arxiv.org/abs/2306.15324v1
- Date: Tue, 27 Jun 2023 09:28:29 GMT
- Title: Anomaly Detection in Networks via Score-Based Generative Models
- Authors: Dmitrii Gavrilev, Evgeny Burnaev
- Abstract summary: Node outlier detection in attributed graphs is a challenging problem for which no single method works well across different datasets.
Motivated by the state-of-the-art results of score-based models in graph generative modeling, we propose to incorporate them into the aforementioned problem.
Our method achieves competitive results on small-scale graphs.
- Score: 14.95826212004679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Node outlier detection in attributed graphs is a challenging problem for
which there is no method that would work well across different datasets.
Motivated by the state-of-the-art results of score-based models in graph
generative modeling, we propose to incorporate them into the aforementioned
problem. Our method achieves competitive results on small-scale graphs. We
provide an empirical analysis of the Dirichlet energy, and show that generative
models might struggle to accurately reconstruct it.
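For reference, the Dirichlet energy of node features X on a graph with Laplacian L is the standard smoothness measure tr(X^T L X), equivalently 0.5 * sum_{ij} A_ij ||x_i - x_j||^2; a minimal NumPy sketch (the exact normalization used in the paper may differ) is:

```python
import numpy as np

def dirichlet_energy(adj: np.ndarray, x: np.ndarray) -> float:
    """Dirichlet energy tr(X^T L X) of node features x (n x d) on a graph
    given by the dense adjacency matrix adj (n x n)."""
    deg = np.diag(adj.sum(axis=1))
    laplacian = deg - adj                      # unnormalized graph Laplacian
    return float(np.trace(x.T @ laplacian @ x))

# toy example: a 3-node path graph with linearly varying features
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.array([[0.0], [1.0], [2.0]])
print(dirichlet_energy(adj, x))  # 2.0: smooth features give low energy
```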
Related papers
- Creating generalizable downstream graph models with random projections [22.690120515637854]
We investigate graph representation learning approaches that enable models to generalize across graphs.
We show that using random projections to estimate multiple powers of the transition matrix allows us to build a set of isomorphism-invariant features.
The resulting features can be used to recover enough information about the local neighborhood of a node to enable inference with relevance competitive to other approaches.
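A minimal sketch of the general idea, assuming a dense adjacency matrix and a shared Gaussian random projection (the function name and parameters are illustrative, not the paper's exact isomorphism-invariant construction):

```python
import numpy as np

def transition_power_features(adj: np.ndarray, k: int = 3, dim: int = 16,
                              seed: int = 0) -> np.ndarray:
    """Rough sketch: estimate features of the first k powers of the
    random-walk transition matrix T = D^{-1} A via a shared random
    projection R. Row i summarizes node i's multi-hop neighborhood."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    T = adj / deg                                       # row-stochastic transition matrix
    R = rng.standard_normal((n, dim)) / np.sqrt(dim)    # random projection
    feats, cur = [], R
    for _ in range(k):
        cur = T @ cur                                   # cur approximates T^p R after p steps
        feats.append(cur)
    return np.concatenate(feats, axis=1)                # shape (n, k * dim)
```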
arXiv Detail & Related papers (2023-02-17T14:27:00Z)
- Joint Network Topology Inference via a Shared Graphon Model [24.077455621015552]
We consider the problem of estimating the topology of multiple networks from nodal observations.
We adopt a graphon as our random graph model, which is a nonparametric model from which graphs of potentially different sizes can be drawn.
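For illustration, drawing a graph from a graphon W: [0,1]^2 -> [0,1] only requires sampling latent positions and Bernoulli edges; a minimal sketch with a hypothetical example graphon (not tied to the paper's inference procedure):

```python
import numpy as np

def sample_from_graphon(graphon, n: int, seed: int = 0) -> np.ndarray:
    """Draw an n-node undirected graph from a graphon W: [0,1]^2 -> [0,1].
    Latent positions u_i ~ Uniform(0,1); edge (i, j) appears w.p. W(u_i, u_j)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    probs = graphon(u[:, None], u[None, :])    # pairwise edge probabilities
    upper = rng.uniform(size=(n, n)) < probs
    adj = np.triu(upper, k=1)                  # keep upper triangle, no self-loops
    return (adj | adj.T).astype(float)

# example graphon: W(u, v) = exp(-(u + v)), denser among small-u nodes
adj = sample_from_graphon(lambda u, v: np.exp(-(u + v)), n=50)
```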
arXiv Detail & Related papers (2022-09-17T02:38:58Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
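A generic sketch of similarity-based positive selection, assuming a precomputed domain-specific similarity matrix; the top-k rule here is an illustrative placeholder rather than the paper's exact sampling procedure:

```python
import numpy as np

def select_positives(sim: np.ndarray, k: int = 5) -> np.ndarray:
    """Given a pairwise similarity matrix sim (n_graphs x n_graphs) from some
    domain-specific measure, return the indices of the k most similar training
    graphs for each anchor, excluding the anchor itself."""
    sim = sim.astype(float)                    # work on a float copy
    np.fill_diagonal(sim, -np.inf)             # never pick the anchor itself
    return np.argsort(-sim, axis=1)[:, :k]     # indices of k nearest graphs per anchor
```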
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Score matching enables causal discovery of nonlinear additive noise models [63.93669924730725]
We show how to design a new generation of scalable causal discovery methods.
We propose a new efficient method for approximating the score's Jacobian, enabling recovery of the causal graph.
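One common way to approximate the diagonal of the score's Jacobian is a Hutchinson-style probe with automatic differentiation; a hedged PyTorch sketch, not necessarily the paper's estimator:

```python
import torch

def score_jacobian_diag(score_fn, x: torch.Tensor, n_probes: int = 16) -> torch.Tensor:
    """Estimate diag(J) of a score function s(x) = grad_x log p(x) at a batch
    x of shape (b, d), using E[v * (J^T v)] with Rademacher probes v."""
    diag = torch.zeros_like(x)
    for _ in range(n_probes):
        v = torch.rand_like(x).round() * 2 - 1            # Rademacher +/-1 probes
        x_req = x.clone().requires_grad_(True)
        s = score_fn(x_req)                               # (b, d) score values
        vjp = torch.autograd.grad((s * v).sum(), x_req)[0]  # J^T v via reverse mode
        diag += v * vjp / n_probes
    return diag
```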
arXiv Detail & Related papers (2022-03-08T21:34:46Z)
- Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations [57.15855198512551]
We propose a novel score-based generative model for graphs with a continuous-time framework.
We show that our method is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule.
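For context, the forward (noising) half of such a framework can be sketched with the standard VP-SDE perturbation kernel applied jointly to node features and adjacency; the reverse-time sampler and score networks are omitted, and details may differ from the paper:

```python
import torch

def vp_perturb(x0: torch.Tensor, adj0: torch.Tensor, t: torch.Tensor,
               beta_min: float = 0.1, beta_max: float = 20.0):
    """Sample from the VP-SDE perturbation kernel at time t in [0, 1] for node
    features x0 (n, d) and adjacency adj0 (n, n), noised jointly."""
    # closed-form marginal for a linear beta(t) schedule
    log_mean_coeff = -0.25 * t ** 2 * (beta_max - beta_min) - 0.5 * t * beta_min
    mean_scale = torch.exp(log_mean_coeff)
    std = torch.sqrt(1.0 - torch.exp(2.0 * log_mean_coeff))
    x_t = mean_scale * x0 + std * torch.randn_like(x0)
    a_t = mean_scale * adj0 + std * torch.randn_like(adj0)
    a_t = 0.5 * (a_t + a_t.T)                    # keep the noised adjacency symmetric
    return x_t, a_t
```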
arXiv Detail & Related papers (2022-02-05T08:21:04Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations naturally lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
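A minimal PyTorch sketch of a stochastic node encoder, where each node is mapped to a Gaussian and embeddings are drawn by reparameterization; the GNN backbone and the contrastive loss are abstracted away, and the class and parameter names are illustrative:

```python
import torch
import torch.nn as nn

class StochasticNodeEncoder(nn.Module):
    """Each node is represented by a Gaussian in the latent space rather than
    a single deterministic vector."""
    def __init__(self, backbone: nn.Module, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.backbone = backbone                 # any GNN producing (n, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x, edge_index):
        h = self.backbone(x, edge_index)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return z, mu, logvar
```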
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
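A crude min-max sketch in the spirit of robust training over graphs, perturbing the uncertain node features within an L_inf ball; this is a generic illustration with hypothetical names, not the paper's distributionally robust formulation:

```python
import torch

def robust_step(model, x, adj, y, loss_fn, eps: float = 0.1, step: float = 0.05):
    """One-step inner maximization: find an adversarial perturbation of the
    node features, then return the loss on the perturbed input for the outer
    minimization."""
    delta = torch.zeros_like(x, requires_grad=True)
    inner = loss_fn(model(x + delta, adj), y)
    grad = torch.autograd.grad(inner, delta)[0]
    delta = (delta + step * grad.sign()).clamp(-eps, eps)   # ascend, then project
    return loss_fn(model(x + delta.detach(), adj), y)
```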
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Adversarial Stein Training for Graph Energy Models [11.182263394122142]
We use an energy-based model (EBM) based on multi-channel graph neural networks (GNN) to learn permutation invariant unnormalized density functions on graphs.
We find that this approach achieves competitive results on graph generation compared to benchmark models.
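A minimal sketch of a permutation-invariant graph EBM in PyTorch, with the GNN backbone abstracted away; the adversarial Stein training procedure itself is not shown, and the class name is illustrative:

```python
import torch
import torch.nn as nn

class GraphEnergyModel(nn.Module):
    """A GNN encodes the graph, sum pooling makes the representation
    permutation invariant, and an MLP maps it to a scalar energy E(G),
    so exp(-E(G)) is an unnormalized density over graphs."""
    def __init__(self, gnn: nn.Module, hidden_dim: int):
        super().__init__()
        self.gnn = gnn                           # node embeddings of shape (n, hidden_dim)
        self.head = nn.Sequential(nn.Linear(hidden_dim, hidden_dim),
                                  nn.ReLU(),
                                  nn.Linear(hidden_dim, 1))

    def forward(self, x, adj):
        h = self.gnn(x, adj)                     # (n, hidden_dim)
        g = h.sum(dim=0)                         # permutation-invariant readout
        return self.head(g).squeeze(-1)          # scalar energy
```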
arXiv Detail & Related papers (2021-08-30T03:55:18Z)
- Stochastic Iterative Graph Matching [11.128153575173213]
We propose a new model, Stochastic Iterative Graph MAtching (SIGMA), to address the graph matching problem.
Our model defines a distribution of matchings for a graph pair so the model can explore a wide range of possible matchings.
We conduct extensive experiments across synthetic graph datasets as well as biochemistry and computer vision applications.
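A generic sketch of representing a distribution over matchings via Sinkhorn-normalized affinities and sampling candidates from it; this is illustrative only and not the paper's exact procedure:

```python
import torch

def sinkhorn(logits: torch.Tensor, n_iters: int = 20) -> torch.Tensor:
    """Turn an (n, n) affinity matrix into an approximately doubly stochastic
    matrix by alternating row/column normalization in log space."""
    for _ in range(n_iters):
        logits = logits - torch.logsumexp(logits, dim=1, keepdim=True)
        logits = logits - torch.logsumexp(logits, dim=0, keepdim=True)
    return logits.exp()

def sample_matchings(affinity: torch.Tensor, n_samples: int = 8):
    """Sample candidate node matchings by drawing each source node's match from
    its normalized row. Samples may collide, so a rejection or refinement step
    would be needed for strict one-to-one matchings."""
    probs = sinkhorn(affinity)
    return [torch.multinomial(probs, 1).squeeze(-1) for _ in range(n_samples)]
```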
arXiv Detail & Related papers (2021-06-04T02:05:35Z)
- Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
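As an illustration of graph regularization, a Laplacian smoothness penalty over label embeddings can simply be added to the base learner's loss; a hedged sketch with hypothetical names, not the paper's exact formulation:

```python
import torch

def laplacian_regularizer(class_embs: torch.Tensor, label_adj: torch.Tensor) -> torch.Tensor:
    """Encourage embeddings of related labels (connected in label_adj) to stay
    close via the Dirichlet energy tr(Z^T L Z) of the label graph."""
    deg = torch.diag(label_adj.sum(dim=1))
    lap = deg - label_adj                        # Laplacian of the label graph
    return torch.trace(class_embs.T @ lap @ class_embs)

# usage: loss = task_loss + lam * laplacian_regularizer(class_embs, label_adj)
```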