Improved Anomaly Detection by Using the Attention-Based Isolation Forest
- URL: http://arxiv.org/abs/2210.02558v1
- Date: Wed, 5 Oct 2022 20:58:57 GMT
- Title: Improved Anomaly Detection by Using the Attention-Based Isolation Forest
- Authors: Lev V. Utkin and Andrey Y. Ageev and Andrei V. Konstantinov
- Abstract summary: The Attention-Based Isolation Forest (ABIForest) is proposed for solving the anomaly detection problem.
The main idea is to assign attention weights to each path of trees with learnable parameters depending on instances and trees themselves.
ABIForest can be viewed as the first modification of Isolation Forest, which incorporates the attention mechanism in a simple way without applying gradient-based algorithms.
- Score: 4.640835690336653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A new modification of Isolation Forest called Attention-Based Isolation
Forest (ABIForest) for solving the anomaly detection problem is proposed. It
incorporates the attention mechanism in the form of the Nadaraya-Watson
regression into the Isolation Forest to improve the solution of the anomaly
detection problem. The main idea underlying the modification is to assign
attention weights to each path of trees with learnable parameters depending on
instances and trees themselves. Huber's contamination model is proposed to
be used for defining the attention weights and their parameters. As a result,
the attention weights depend linearly on the learnable attention parameters,
which are trained by solving a standard linear or quadratic optimization
problem. ABIForest can be viewed as the first modification of Isolation Forest,
which incorporates the attention mechanism in a simple way without applying
gradient-based algorithms. Numerical experiments with synthetic and real
datasets illustrate the superior performance of ABIForest. The code of the
proposed algorithms is available.
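The idea described in the abstract can be sketched in code: per-tree path depths from a fitted Isolation Forest are combined through Nadaraya-Watson-style softmax weights, and, following Huber's contamination model, the softmax is mixed with trainable parameters w as alpha = (1 - eps) * softmax + eps * w. The leaf-distance kernel, the function names, and the untrained uniform w below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def abiforest_score(x, forest, X_train, w, eps=0.2, tau=1.0):
    """Illustrative attention-weighted anomaly score (a sketch, not
    the paper's exact algorithm).

    Per-tree path depths h_k(x) are combined with Nadaraya-Watson-style
    softmax weights computed from the distance between x and the mean of
    the training points sharing x's leaf; the softmax is mixed with the
    trainable parameters w via the contamination model
    alpha = (1 - eps) * softmax + eps * w.
    """
    depths, dists = [], []
    for est in forest.estimators_:
        # depth of x's path = number of nodes visited minus the root
        path = est.decision_path(x.reshape(1, -1))
        depths.append(path.sum() - 1)
        # mean of the training points that fall into x's leaf in this tree
        leaves = est.apply(X_train)
        leaf_x = est.apply(x.reshape(1, -1))[0]
        mates = X_train[leaves == leaf_x]
        center = mates.mean(axis=0) if len(mates) else x
        dists.append(np.linalg.norm(x - center))
    depths = np.asarray(depths, dtype=float)
    kernel = np.exp(-np.asarray(dists) / tau)
    softmax = kernel / kernel.sum()
    alpha = (1.0 - eps) * softmax + eps * w   # contamination mixture
    # a shorter attention-weighted mean path depth => more anomalous
    return float(alpha @ depths)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
forest = IsolationForest(n_estimators=10, random_state=0).fit(X)
w = np.full(10, 1.0 / 10)   # untrained uniform attention parameters
inlier_depth = abiforest_score(X[0], forest, X, w)
outlier_depth = abiforest_score(np.array([6.0, 6.0]), forest, X, w)
```

In the paper, w would instead be fitted by the linear or quadratic program the abstract mentions; here it is left uniform purely to keep the sketch self-contained.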
Related papers
- Effort: Efficient Orthogonal Modeling for Generalizable AI-Generated Image Detection [66.16595174895802]
Existing AI-generated image (AIGI) detection methods often suffer from limited generalization performance.
In this paper, we identify a crucial yet previously overlooked asymmetry phenomenon in AIGI detection.
arXiv Detail & Related papers (2024-11-23T19:10:32Z)
- PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings [55.55445978692678]
PseudoNeg-MAE is a self-supervised learning framework that enhances global feature representation of point cloud mask autoencoders.
We show that PseudoNeg-MAE achieves state-of-the-art performance on the ModelNet40 and ScanObjectNN datasets.
arXiv Detail & Related papers (2024-09-24T07:57:21Z)
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranked first.
arXiv Detail & Related papers (2023-08-31T05:05:53Z)
- GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Inverse Problems with Denoising Diffusion Restoration [64.8770356696056]
We propose GibbsDDRM, an extension of Denoising Diffusion Restoration Models (DDRM) to a blind setting in which the linear measurement operator is unknown.
The proposed method is problem-agnostic, meaning that a pre-trained diffusion model can be applied to various inverse problems without fine-tuning.
arXiv Detail & Related papers (2023-01-30T06:27:48Z)
- LARF: Two-level Attention-based Random Forests with a Mixture of Contamination Models [5.482532589225552]
New models of the attention-based random forests called LARF (Leaf Attention-based Random Forest) are proposed.
The first idea is to introduce a two-level attention, where one of the levels is the "leaf" attention and the attention mechanism is applied to every leaf of trees.
The second idea is to replace the softmax operation in the attention with the weighted sum of the softmax operations with different parameters.
arXiv Detail & Related papers (2022-10-11T06:14:12Z)
- AGBoost: Attention-based Modification of Gradient Boosting Machine [0.0]
A new attention-based model for the gradient boosting machine (GBM) called AGBoost is proposed for solving regression problems.
The main idea behind the proposed AGBoost model is to assign attention weights with trainable parameters to iterations of GBM.
arXiv Detail & Related papers (2022-07-12T17:42:20Z)
- Attention and Self-Attention in Random Forests [5.482532589225552]
New models of random forests jointly using the attention and self-attention mechanisms are proposed.
The self-attention aims to capture dependencies of the tree predictions and to remove noise or anomalous predictions in the random forest.
arXiv Detail & Related papers (2022-07-09T16:15:53Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Attention-based Random Forest and Contamination Model [5.482532589225552]
The main idea behind the proposed ABRF models is to assign attention weights with trainable parameters to decision trees in a specific way.
The weights depend on the distance between an instance, which falls into a corresponding leaf of a tree, and instances, which fall in the same leaf.
arXiv Detail & Related papers (2022-01-08T19:35:57Z)
- Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance [4.769747792846005]
Anomaly Detection is an unsupervised learning task aimed at detecting anomalous behaviours with respect to historical data.
The Isolation Forest is one of the most commonly adopted algorithms in the field of Anomaly Detection.
This paper proposes methods to define feature importance scores at both global and local level for the Isolation Forest.
arXiv Detail & Related papers (2020-07-21T22:19:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.