Hop-Count Based Self-Supervised Anomaly Detection on Attributed Networks
- URL: http://arxiv.org/abs/2104.07917v1
- Date: Fri, 16 Apr 2021 06:43:05 GMT
- Title: Hop-Count Based Self-Supervised Anomaly Detection on Attributed Networks
- Authors: Tianjin Huang, Yulong Pei, Vlado Menkovski and Mykola Pechenizkiy
- Abstract summary: We propose a hop-count based model (HCM) to detect anomalies by modeling both local and global contextual information.
To make better use of hop counts for anomaly identification, we propose to use hop-count prediction as a self-supervised task.
- Score: 8.608288231153304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed an upsurge of interest in the problem of anomaly
detection on attributed networks due to its importance in both research and
practice. Although various approaches have been proposed to solve this problem,
two major limitations exist: (1) unsupervised approaches usually work much less
efficiently due to the lack of supervisory signal, and (2) existing anomaly
detection methods only use local contextual information to detect anomalous
nodes, e.g., one- or two-hop information, but ignore the global contextual
information. Since anomalous nodes differ from normal nodes in structures and
attributes, it is intuitive that the distance between anomalous nodes and their
neighbors should be larger than that between normal nodes and their neighbors
if we remove the edges connecting anomalous and normal nodes. Thus, hop counts
based on both global and local contextual information can serve as indicators of anomaly. Motivated by this intuition, we propose a hop-count
based model (HCM) to detect anomalies by modeling both local and global
contextual information. To make better use of hop counts for anomaly
identification, we propose to use hop-count prediction as a self-supervised
task. We design two anomaly scores based on the hop-count predictions of the HCM
model to identify anomalies. Besides, we employ Bayesian learning to train the HCM
model, capturing uncertainty in the learned parameters and avoiding overfitting.
Extensive experiments on real-world attributed networks demonstrate that our
proposed model is effective in anomaly detection.
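The Python sketch below illustrates the general recipe described in the abstract: pose hop-count prediction between node pairs as a self-supervised task, then flag nodes whose observed hop counts the trained model finds hard to predict. It is a minimal illustration, not the authors' implementation: a plain MLP over node attributes stands in for whatever graph encoder the paper uses, only one illustrative anomaly score is shown (the paper designs two), and the Bayesian training of the model is omitted. All names, hop buckets, and hyperparameters below are assumptions for the sketch.

```python
# Hedged sketch of the hop-count self-supervision idea; not the HCM code.
import random

import networkx as nx
import torch
import torch.nn as nn
import torch.nn.functional as F


class HopCountPredictor(nn.Module):
    """Predicts a bucketed hop count (1, 2, ..., >= max_hops) for a node pair."""

    def __init__(self, attr_dim: int, hidden: int = 64, max_hops: int = 4):
        super().__init__()
        self.max_hops = max_hops
        self.encoder = nn.Sequential(nn.Linear(attr_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(2 * hidden, max_hops)

    def forward(self, x_i, x_j):
        h_i, h_j = self.encoder(x_i), self.encoder(x_j)
        return self.classifier(torch.cat([h_i, h_j], dim=-1))


def hop_label(lengths, u, v, max_hops):
    """Bucket the shortest-path distance: hop 1 -> class 0, ..., >= max_hops -> last class."""
    d = lengths.get(u, {}).get(v, max_hops)  # unreachable / beyond cutoff -> last bucket
    return min(d, max_hops) - 1


def train_step(model, optimizer, graph, attrs, lengths, n_pairs=256):
    """One self-supervised step: predict hop counts for random node pairs."""
    nodes = list(graph.nodes)
    pairs = [random.sample(nodes, 2) for _ in range(n_pairs)]
    x_i = attrs[[u for u, _ in pairs]]
    x_j = attrs[[v for _, v in pairs]]
    y = torch.tensor([hop_label(lengths, u, v, model.max_hops) for u, v in pairs])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_i, x_j), y)
    loss.backward()
    optimizer.step()
    return loss.item()


@torch.no_grad()
def anomaly_scores(model, graph, attrs, lengths, n_samples=64):
    """Score each node by the average cross-entropy of hop-count predictions
    for sampled pairs involving it (higher = more anomalous)."""
    nodes = list(graph.nodes)
    scores = {}
    for u in nodes:
        others = random.sample([v for v in nodes if v != u], min(n_samples, len(nodes) - 1))
        x_u = attrs[u].expand(len(others), -1)
        x_v = attrs[others]
        y = torch.tensor([hop_label(lengths, u, v, model.max_hops) for v in others])
        scores[u] = F.cross_entropy(model(x_u, x_v), y).item()
    return scores


# Illustrative usage on a small random attributed graph (not a real dataset).
if __name__ == "__main__":
    g = nx.erdos_renyi_graph(100, 0.05, seed=0)
    attrs = torch.randn(100, 16)
    lengths = dict(nx.all_pairs_shortest_path_length(g, cutoff=4))
    model = HopCountPredictor(attr_dim=16)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        train_step(model, opt, g, attrs, lengths)
    top5 = sorted(anomaly_scores(model, g, attrs, lengths).items(), key=lambda kv: -kv[1])[:5]
    print(top5)
```

Bucketing distances at `max_hops` keeps the task a small classification problem; in the paper the plain Adam loop would be replaced by Bayesian training of the model to capture uncertainty in the learned parameters.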
Related papers
- Alleviating Structural Distribution Shift in Graph Anomaly Detection [70.1022676681496]
Graph anomaly detection (GAD) is a challenging binary classification problem.
Graph neural networks (GNNs) benefit the classification of normal nodes by aggregating homophilous neighbors.
We propose a framework that mitigates the effect of heterophilous neighbors and makes the learned representations invariant to them.
arXiv Detail & Related papers (2024-01-25T13:07:34Z) - Multitask Active Learning for Graph Anomaly Detection [48.690169078479116]
We propose a novel MultItask acTIve Graph Anomaly deTEction framework, namely MITIGATE.
By coupling node classification tasks, MITIGATE obtains the capability to detect out-of-distribution nodes without known anomalies.
Empirical studies on four datasets demonstrate that MITIGATE significantly outperforms the state-of-the-art methods for anomaly detection.
arXiv Detail & Related papers (2024-01-24T03:43:45Z) - SCALA: Sparsification-based Contrastive Learning for Anomaly Detection
on Attributed Networks [19.09775548036214]
Anomaly detection on attributed networks aims to find nodes whose behaviors differ significantly from those of the majority of nodes.
We present SCALA, a novel contrastive learning framework for anomaly detection on attributed networks, aiming to improve the embedding quality of the network.
Extensive experiments are conducted on five benchmark real-world datasets and the results show that SCALA consistently outperforms all baseline methods significantly.
arXiv Detail & Related papers (2024-01-03T08:51:18Z) - Open-Set Graph Anomaly Detection via Normal Structure Regularisation [30.638274744518682]
Open-set Graph Anomaly Detection (GAD) aims to train a detection model using a small number of labelled normal and anomaly nodes so that it also generalises to anomalies unseen during training.
Current supervised GAD methods tend to over-emphasise fitting the seen anomalies, so many unseen anomalies are misdetected as normal nodes.
We propose a novel open-set GAD approach, namely normal structure regularisation (NSReg), to achieve generalised detection ability to unseen anomalies.
arXiv Detail & Related papers (2023-11-12T13:25:28Z) - Label-based Graph Augmentation with Metapath for Graph Anomaly Detection [8.090325400557697]
We present a new framework, Metapath-based Graph Anomaly Detection (MGAD), incorporating GCN layers in both the dual-encoders and decoders.
Through a comprehensive set of experiments conducted on seven real-world networks, this paper demonstrates the superiority of the MGAD method compared to state-of-the-art techniques.
arXiv Detail & Related papers (2023-08-21T05:41:05Z) - Decoupling anomaly discrimination and representation learning:
self-supervised learning for anomaly detection on attributed graph [18.753970895946814]
DSLAD is a self-supervised method that decouples anomaly discrimination from representation learning for anomaly detection.
Experiments conducted on six benchmark datasets demonstrate the effectiveness of DSLAD.
arXiv Detail & Related papers (2023-04-11T12:23:40Z) - ARISE: Graph Anomaly Detection on Attributed Networks via Substructure
Awareness [70.60721571429784]
We propose a new graph anomaly detection framework on attributed networks via substructure awareness (ARISE).
ARISE focuses on the substructures in the graph to discern abnormalities.
Experiments show that ARISE greatly improves detection performance compared to state-of-the-art attributed network anomaly detection (ANAD) algorithms.
arXiv Detail & Related papers (2022-11-28T12:17:40Z) - Are we certain it's anomalous? [57.729669157989235]
Anomaly detection in time series is a complex task: anomalies are rare, and temporal correlations are highly non-linear.
Here we propose the novel use of Hyperbolic uncertainty for Anomaly Detection (HypAD).
HypAD learns to reconstruct the input signal in a self-supervised manner.
arXiv Detail & Related papers (2022-11-16T21:31:39Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability (a minimal sketch of this deviation-style objective is given after this list).
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - Few-shot Network Anomaly Detection via Cross-network Meta-learning [45.8111239825361]
We propose a new family of graph neural networks: Graph Deviation Networks (GDN).
GDN can leverage a small number of labeled anomalies for enforcing statistically significant deviations between abnormal and normal nodes on a network.
We equip the proposed GDN with a new cross-network meta-learning algorithm to realize few-shot network anomaly detection.
arXiv Detail & Related papers (2021-02-22T16:42:37Z) - Deep Weakly-supervised Anomaly Detection [118.55172352231381]
Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)
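As a companion to the Deviation Networks and GDN entries above, here is a minimal Python sketch of a deviation-style objective: an anomaly scorer keeps the scores of unlabeled (mostly normal) data near a Gaussian reference prior while pushing the few labeled anomalies a margin of several standard deviations away. The network size, margin, and number of reference samples are illustrative assumptions rather than values taken from those papers.

```python
# Hedged sketch of a deviation-style loss; not the papers' released code.
import torch
import torch.nn as nn


class AnomalyScorer(nn.Module):
    """Maps a feature vector to a single scalar anomaly score."""

    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)


def deviation_loss(scores, labels, n_ref: int = 5000, margin: float = 5.0):
    """labels: 1 for labeled anomalies, 0 for (mostly normal) unlabeled data."""
    ref = torch.randn(n_ref, device=scores.device)            # Gaussian reference prior
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)
    normal_term = (1 - labels) * dev.abs()                     # keep normals near the prior mean
    anomaly_term = labels * torch.clamp(margin - dev, min=0.0) # push anomalies >= margin std devs away
    return (normal_term + anomaly_term).mean()


# Illustrative usage with random data (not a real dataset).
if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(256, 16)
    y = (torch.rand(256) < 0.05).float()   # a few "labeled anomalies"
    scorer = AnomalyScorer(in_dim=16)
    opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        loss = deviation_loss(scorer(x), y)
        loss.backward()
        opt.step()
```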