LUNAR: Unifying Local Outlier Detection Methods via Graph Neural Networks
- URL: http://arxiv.org/abs/2112.05355v1
- Date: Fri, 10 Dec 2021 06:50:32 GMT
- Title: LUNAR: Unifying Local Outlier Detection Methods via Graph Neural Networks
- Authors: Adam Goodge, Bryan Hooi, See Kiong Ng, Wee Siong Ng
- Abstract summary: LUNAR learns to use information from the nearest neighbours of each node in a trainable way to find anomalies.
We show that our method performs significantly better than existing local outlier methods, as well as state-of-the-art deep baselines.
- Score: 17.586486249721265
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many well-established anomaly detection methods use the distance of a sample
to those in its local neighbourhood: so-called 'local outlier methods', such as
LOF and DBSCAN. They are popular for their simple principles and strong
performance on unstructured, feature-based data that is commonplace in many
practical applications. However, they cannot learn to adapt for a particular
set of data due to their lack of trainable parameters. In this paper, we begin
by unifying local outlier methods by showing that they are particular cases of
the more general message passing framework used in graph neural networks. This
allows us to introduce learnability into local outlier methods, in the form of
a neural network, for greater flexibility and expressivity: specifically, we
propose LUNAR, a novel, graph neural network-based anomaly detection method.
LUNAR learns to use information from the nearest neighbours of each node in a
trainable way to find anomalies. We show that our method performs significantly
better than existing local outlier methods, as well as state-of-the-art deep
baselines. We also show that the performance of our method is much more robust
to different settings of the local neighbourhood size.
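The abstract's message-passing view can be sketched in a few lines: a classical k-NN-based detector gathers the distances to each point's nearest neighbours (the "messages") and aggregates them with a fixed function, and LUNAR's contribution is to make that aggregation trainable. The sketch below is only an illustration of this view, with a fixed mean aggregation standing in for the learned network; it is not the authors' architecture:

```python
import math

def knn_distance_features(points, k):
    """For each point, collect the sorted distances to its k nearest
    neighbours -- the 'messages' a node receives in the GNN view."""
    feats = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        feats.append(dists[:k])
    return feats

def fixed_aggregation_score(feats):
    """A classic k-NN detector: a *fixed* aggregation (here the mean) of
    the neighbour-distance messages. LUNAR replaces this fixed function
    with a trainable network learned from data."""
    return [sum(f) / len(f) for f in feats]

# Tight cluster plus one isolated point.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
scores = fixed_aggregation_score(knn_distance_features(data, k=3))
print(scores.index(max(scores)))  # -> 4, the isolated point
```

Swapping the mean for max recovers the k-th/average-NN-distance family of detectors, which is the sense in which these methods are special cases of one message-passing template.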
Related papers
- Inferring Neural Signed Distance Functions by Overfitting on Single Noisy Point Clouds through Finetuning Data-Driven based Priors [53.6277160912059]
We propose a method to combine the advantages of data-driven and overfitting-based methods for better generalization, faster inference, and higher accuracy in learning neural SDFs.
We introduce a novel statistical reasoning algorithm in local regions which can finetune data-driven priors without signed distance supervision, clean point clouds, or point normals.
arXiv Detail & Related papers (2024-10-25T16:48:44Z)
- Deep Homography Estimation for Visual Place Recognition [49.235432979736395]
We propose a transformer-based deep homography estimation (DHE) network.
It takes the dense feature map extracted by a backbone network as input and fits homography for fast and learnable geometric verification.
Experiments on benchmark datasets show that our method can outperform several state-of-the-art methods.
arXiv Detail & Related papers (2024-02-25T13:22:17Z)
- Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art.
arXiv Detail & Related papers (2022-09-21T02:33:07Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Local Augmentation for Graph Neural Networks [78.48812244668017]
We introduce local augmentation, which enhances node features using their local subgraph structures.
Based on the local augmentation, we further design a novel framework: LA-GNN, which can apply to any GNN models in a plug-and-play manner.
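One simple reading of this augmentation can be sketched as follows. The paper learns a generative model of neighbour features; in this illustration a plain neighbour mean stands in for the learned sample, and `adj` is a hypothetical adjacency map:

```python
def augment_node_features(features, adj):
    """Append a neighbourhood-derived vector to each node's own feature
    vector. A plain neighbour mean stands in here for the learned
    generative sample; any GNN can consume the result (plug-and-play)."""
    augmented = []
    for i, feat in enumerate(features):
        nbrs = [features[j] for j in adj[i]]
        if nbrs:
            mean = [sum(col) / len(nbrs) for col in zip(*nbrs)]
        else:
            mean = [0.0] * len(feat)  # isolated node: nothing to borrow
        augmented.append(list(feat) + mean)
    return augmented

feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adj = {0: [1, 2], 1: [0], 2: [0, 1]}
aug = augment_node_features(feats, adj)
print(aug[0])  # -> [1.0, 0.0, 0.5, 1.0]: own features plus neighbour mean
```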
arXiv Detail & Related papers (2021-09-08T18:10:08Z)
- Real-time Outdoor Localization Using Radio Maps: A Deep Learning Approach [59.17191114000146]
We propose LocUNet, a convolutional, end-to-end trained neural network (NN) for the localization task.
We show that LocUNet can localize users with state-of-the-art accuracy and enjoys high robustness to inaccuracies in the estimations of radio maps.
arXiv Detail & Related papers (2021-06-23T17:27:04Z)
- Communication-Efficient Sampling for Distributed Training of Graph Convolutional Networks [3.075766050800645]
Training Graph Convolutional Networks (GCNs) is expensive as it needs to aggregate data from neighboring nodes.
Previous works have proposed various neighbor sampling methods that estimate the aggregation result based on a small number of sampled neighbors.
We present an algorithm that determines the local sampling probabilities and ensures that the skewed neighbor sampling does not significantly affect the convergence of training.
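The estimator behind such schemes can be sketched generically: draw a few neighbours with skewed probabilities and reweight so the aggregation stays unbiased in expectation. The probabilities below are hypothetical placeholders; the paper's contribution lies in how they are chosen so that convergence is preserved:

```python
import random

def sampled_aggregate(neighbor_feats, probs, num_samples, rng):
    """Estimate the sum of neighbour features from a few samples drawn
    with skewed probabilities (e.g. favouring machine-local neighbours).
    The importance weight 1/(num_samples * p) keeps the estimate unbiased."""
    idx = rng.choices(range(len(neighbor_feats)), weights=probs, k=num_samples)
    return sum(neighbor_feats[i] / (num_samples * probs[i]) for i in idx)

rng = random.Random(0)
feats = [1.0, 2.0, 3.0, 4.0]
probs = [0.4, 0.4, 0.1, 0.1]  # hypothetical skew toward "cheap" neighbours
estimates = [sampled_aggregate(feats, probs, 3, rng) for _ in range(20000)]
print(round(sum(estimates) / len(estimates)))  # -> 10, matching sum(feats)
```

Skewing the probabilities reduces communication but inflates the variance of the estimator, which is the trade-off the sampling probabilities must balance.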
arXiv Detail & Related papers (2021-01-19T16:12:44Z)
- Outlier Detection through Null Space Analysis of Neural Networks [3.220347094114561]
We use the concept of the null space to integrate an outlier detection method directly into a neural network used for classification.
Our method, called Null Space Analysis (NuSA) of neural networks, works by computing and controlling the magnitude of the null space projection as data is passed through a network.
Results are shown that indicate networks trained with NuSA retain their classification performance while also being able to detect outliers at rates similar to commonly used outlier detection algorithms.
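For a single linear layer, the quantity NuSA monitors can be sketched directly: the component of an input orthogonal to the layer's row space is exactly the part the layer maps to zero. A minimal pure-Python sketch using Gram-Schmidt (the paper applies this inside a trained classifier, not to a toy matrix):

```python
import math

def row_space_basis(W):
    """Orthonormal basis of the row space of W via Gram-Schmidt."""
    basis = []
    for row in W:
        v = list(row)
        for b in basis:
            dot = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - dot * bi for vi, bi in zip(v, b)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        if norm > 1e-10:
            basis.append([vi / norm for vi in v])
    return basis

def null_space_projection_norm(W, x):
    """Magnitude of the component of x orthogonal to the rows of W, i.e.
    the part of the input the layer discards. NuSA monitors and controls
    this quantity; large values flag inputs unlike the training data."""
    residual = list(x)
    for b in row_space_basis(W):
        dot = sum(ri * bi for ri, bi in zip(residual, b))
        residual = [ri - dot * bi for ri, bi in zip(residual, b)]
    return math.sqrt(sum(ri * ri for ri in residual))

W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # layer ignores the 3rd coordinate
print(null_space_projection_norm(W, [0.5, 0.5, 0.0]))  # ~0.0: fully "seen"
print(null_space_projection_norm(W, [0.0, 0.0, 3.0]))  # ~3.0: invisible to the layer
```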
arXiv Detail & Related papers (2020-07-02T17:17:21Z)
- PushNet: Efficient and Adaptive Neural Message Passing [1.9121961872220468]
Message passing neural networks have recently evolved into a state-of-the-art approach to representation learning on graphs.
Existing methods perform synchronous message passing along all edges in multiple subsequent rounds.
We consider a novel asynchronous message passing approach where information is pushed only along the most relevant edges until convergence.
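The push idea can be sketched with a residual-based scheme in the spirit of approximate personalized PageRank: each node keeps a residual and pushes mass along its edges only while that residual is large, instead of synchronously updating every edge each round. The propagation operator and constants below are illustrative, not the paper's exact formulation:

```python
def push_propagate(adj, signal, alpha=0.15, tol=1e-6):
    """Asynchronous 'push' message passing: process only nodes whose
    residual exceeds tol, keeping an alpha-share locally and pushing the
    rest to neighbours. Assumes every node has at least one neighbour."""
    estimate = {u: 0.0 for u in adj}
    residual = {u: signal.get(u, 0.0) for u in adj}
    frontier = [u for u, r in residual.items() if r > tol]
    while frontier:
        u = frontier.pop()
        r = residual[u]
        if r <= tol:          # stale frontier entry, already drained
            continue
        residual[u] = 0.0
        estimate[u] += alpha * r
        share = (1.0 - alpha) * r / len(adj[u])
        for v in adj[u]:
            residual[v] += share
            if residual[v] > tol:
                frontier.append(v)
    return estimate

# Path graph 0-1-2; a unit of signal starts at node 0.
adj = {0: [1], 1: [0, 2], 2: [1]}
est = push_propagate(adj, {0: 1.0})
print(round(sum(est.values()), 3))  # -> 1.0: all mass is absorbed
```

Only "active" edges are ever touched, which is the source of the efficiency claim; the result is independent of the order in which pushes are processed.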
arXiv Detail & Related papers (2020-03-04T18:15:30Z)
- Fast local linear regression with anchor regularization [21.739281173516247]
We propose a simple yet effective local model training algorithm called the fast anchor regularized local linear method (FALL).
Through experiments on synthetic and real-world datasets, we demonstrate that FALL compares favorably in terms of accuracy with the state-of-the-art network Lasso algorithm.
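One plausible reading of anchor regularization, sketched for a 1-D local model: fit y ≈ wx + b on a neighbourhood while quadratically pulling (w, b) toward a shared anchor model, which admits a closed-form solution. FALL's actual formulation and solver may differ:

```python
def anchored_linear_fit(xs, ys, anchor_w, anchor_b, lam):
    """Fit y ~ w*x + b on a local neighbourhood with a quadratic penalty
    lam * ||(w, b) - anchor||^2 pulling the fit toward a shared anchor
    model. Solves the 2x2 regularised normal equations in closed form."""
    n = len(xs)
    sxx = sum(x * x for x in xs); sx = sum(xs)
    sxy = sum(x * y for x, y in zip(xs, ys)); sy = sum(ys)
    # (A + lam*I) [w, b]^T = rhs + lam * anchor
    a11, a12, a22 = sxx + lam, sx, n + lam
    r1 = sxy + lam * anchor_w
    r2 = sy + lam * anchor_b
    det = a11 * a22 - a12 * a12
    w = (r1 * a22 - a12 * r2) / det
    b = (a11 * r2 - a12 * r1) / det
    return w, b

# Local points on the line y = 2x; anchor model is y = 0.
w, b = anchored_linear_fit([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], 0.0, 0.0, lam=0.0)
print(round(w, 6), round(b, 6))  # -> 2.0 0.0 with no regularisation
w2, b2 = anchored_linear_fit([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], 0.0, 0.0, lam=10.0)
print(w2 < w)  # -> True: the penalty shrinks the slope toward the anchor
```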
arXiv Detail & Related papers (2020-02-21T10:03:33Z)
- A flexible outlier detector based on a topology given by graph communities [0.0]
Anomaly detection is essential for the optimal performance of machine learning methods and statistical predictive models.
Topology is computed using the communities of a weighted graph codifying mutual nearest neighbors in the feature space.
Our approach outperforms both local and global strategies in multi- and single-view settings.
arXiv Detail & Related papers (2020-02-18T18:40:31Z)
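The simplest signal such a graph topology provides can be sketched as follows: build the mutual k-NN graph and flag points with no mutual neighbours. The paper goes further, scoring points against the communities of a weighted graph; this isolation test is only the degenerate case:

```python
import math

def mutual_knn_outliers(points, k):
    """Build a mutual k-NN graph (edge i-j only if i is among j's k
    nearest neighbours AND j is among i's) and return the indices of
    points left isolated -- the crudest topological outlier signal."""
    n = len(points)
    knn = []
    for i in range(n):
        order = sorted(range(n), key=lambda j: math.dist(points[i], points[j]))
        knn.append(set(order[1:k + 1]))  # skip self at distance 0
    return [i for i in range(n)
            if not any(i in knn[j] for j in knn[i])]

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
print(mutual_knn_outliers(pts, k=2))  # -> [4]: no mutual neighbours
```

The distant point nominates cluster members as neighbours, but no cluster member nominates it back; that asymmetry, rather than any distance threshold, is what the mutual graph captures.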
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.