Attention-based Random Forest and Contamination Model
- URL: http://arxiv.org/abs/2201.02880v1
- Date: Sat, 8 Jan 2022 19:35:57 GMT
- Title: Attention-based Random Forest and Contamination Model
- Authors: Lev V. Utkin and Andrei V. Konstantinov
- Abstract summary: The main idea behind the proposed ABRF models is to assign attention weights with trainable parameters to decision trees in a specific way.
The weights depend on the distance between an instance that falls into a corresponding leaf of a tree and the instances that fall into the same leaf.
- Score: 5.482532589225552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A new approach called ABRF (the attention-based random forest) and its modifications for applying the attention mechanism to the random forest (RF) for regression and classification are proposed. The main idea behind the proposed ABRF models is to assign attention weights with trainable parameters to decision trees in a specific way. The weights depend on the distance between an instance that falls into a corresponding leaf of a tree and the instances that fall into the same leaf. This idea stems from the representation of Nadaraya-Watson kernel regression in the form of an RF. Three modifications of the general approach are proposed. The first is based on applying Huber's contamination model and computing the attention weights by solving quadratic or linear optimization problems. The second and third modifications use gradient-based algorithms for computing the trainable parameters. Numerical experiments with various regression and classification datasets illustrate the proposed method.
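To make the weighting scheme concrete, below is a minimal sketch (not the authors' implementation) of tree-level attention in Nadaraya-Watson form on top of scikit-learn: each tree's vote on a query is weighted by the query's distance to the mean of the training instances sharing its leaf. The softmax form and the temperature `tau` are illustrative assumptions standing in for the trainable contamination or gradient-based weights described above.

```python
# Sketch only: softmax attention over trees, keyed on leaf distances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def abrf_predict(forest, X_train, X_new, tau=1.0):
    """Weight each tree's vote by the distance between the query and the
    mean of the training instances that share its leaf (assumed kernel)."""
    preds, dists = [], []
    for tree in forest.estimators_:
        train_leaves = tree.apply(X_train)   # leaf id of every training point
        new_leaves = tree.apply(X_new)       # leaf id of every query point
        leaf_means = np.stack([X_train[train_leaves == leaf].mean(axis=0)
                               for leaf in new_leaves])
        dists.append(((X_new - leaf_means) ** 2).sum(axis=1))
        preds.append(tree.predict(X_new))
    dists, preds = np.stack(dists), np.stack(preds)   # (n_trees, n_queries)
    w = np.exp(-(dists - dists.min(axis=0)) / tau)    # softmax over trees
    w /= w.sum(axis=0, keepdims=True)
    return (w * preds).sum(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)); y = X[:, 0] + 0.1 * rng.normal(size=200)
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(abrf_predict(rf, X, X[:5]))
```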
Related papers
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
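A rough sketch of that attention step, under heavy assumptions (the state dimension, projection matrices, and the sigmoid read-out are all illustrative, not the paper's architecture):

```python
# Sketch only: one self-attention step over per-point residuals that
# updates a per-point state and yields weights for the next sampling round.
import numpy as np

def attention_update(residuals, state, Wq, Wk, Wv):
    """residuals: (n_points, n_models); state: (n_points, d)."""
    feats = np.concatenate([residuals, state], axis=1)
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    att = np.exp(scores - scores.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)              # each point attends to all
    new_state = att @ v                                # consensus-aware update
    inlier_w = 1.0 / (1.0 + np.exp(-new_state[:, 0]))  # sampling weights
    return new_state, inlier_w
```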
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- Neural Attention Forests: Transformer-Based Forest Improvement [4.129225533930966]
The main idea behind the proposed NAF model is to introduce the attention mechanism into the random forest.
In contrast to the available models like the attention-based random forest, the attention weights and the Nadaraya-Watson regression are represented in the form of neural networks.
The combination of the random forest and neural networks implementing the attention mechanism forms a transformer for enhancing the forest predictions.
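As a toy illustration of that contrast (a sketch, not the NAF architecture; the two-layer network and its sizes are assumptions), the attention score can come from a trainable network rather than a fixed distance kernel:

```python
# Sketch only: a small trainable network scores (query, leaf-key) pairs,
# and the scores drive a Nadaraya-Watson combination of leaf values.
import numpy as np

rng = np.random.default_rng(0)
d = 5                                   # feature dimension (assumed)
W1 = rng.normal(size=(2 * d, 16)) * 0.1
W2 = rng.normal(size=(16, 1)) * 0.1

def neural_score(query, leaf_key):
    """Trainable attention score for one (query, key) pair."""
    h = np.tanh(np.concatenate([query, leaf_key]) @ W1)
    return float(h @ W2)

def neural_attention(query, leaf_keys, leaf_values):
    scores = np.array([neural_score(query, k) for k in leaf_keys])
    w = np.exp(scores - scores.max()); w /= w.sum()
    return w @ leaf_values              # Nadaraya-Watson combination
```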
arXiv Detail & Related papers (2023-04-12T17:01:38Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- LARF: Two-level Attention-based Random Forests with a Mixture of Contamination Models [5.482532589225552]
New models of attention-based random forests, called LARF (Leaf Attention-based Random Forest), are proposed.
The first idea is to introduce a two-level attention, where one of the levels is the "leaf" attention and the attention mechanism is applied to every leaf of the trees.
The second idea is to replace the softmax operation in the attention with a weighted sum of softmax operations with different parameters.
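The second idea admits a compact sketch (not the authors' code; the temperatures `taus` and mixture weights `etas` are illustrative and would be trainable in LARF):

```python
# Sketch only: a convex combination of softmaxes with different temperatures.
import numpy as np

def softmax(z, tau=1.0):
    e = np.exp((z - z.max()) / tau)
    return e / e.sum()

def mixture_of_softmax(scores, taus=(0.5, 1.0, 2.0), etas=(0.2, 0.5, 0.3)):
    """Replace one softmax with a weighted sum of softmaxes; `etas` must
    sum to 1 so the result is still a valid weight vector."""
    return sum(eta * softmax(scores, tau) for eta, tau in zip(etas, taus))

w = mixture_of_softmax(np.array([1.0, 2.0, 0.5]))
print(w, w.sum())   # sums to 1
```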
arXiv Detail & Related papers (2022-10-11T06:14:12Z)
- Improved Anomaly Detection by Using the Attention-Based Isolation Forest [4.640835690336653]
An Attention-Based Isolation Forest (ABIForest) for solving the anomaly detection problem is proposed.
The main idea is to assign attention weights to each path of the trees, with learnable parameters depending on the instances and the trees themselves.
ABIForest can be viewed as the first modification of the Isolation Forest that incorporates the attention mechanism in a simple way, without applying gradient-based algorithms.
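A toy sketch of the flavor of this idea (not the ABIForest algorithm itself; the softmax weighting of path lengths and the temperature `tau` are assumptions, whereas the paper computes the weights via a contamination model):

```python
# Sketch only: combine per-tree isolation path lengths with attention
# weights instead of a plain mean; shorter paths get larger weight.
import numpy as np
from sklearn.ensemble import IsolationForest

def attention_path_score(iforest, X, tau=1.0):
    depths = []
    for tree in iforest.estimators_:
        nodes = tree.decision_path(X)                # nodes visited per sample
        depths.append(np.asarray(nodes.sum(axis=1)).ravel() - 1.0)
    depths = np.stack(depths)                        # (n_trees, n_samples)
    w = np.exp(-(depths - depths.min(axis=0)) / tau)
    w /= w.sum(axis=0, keepdims=True)
    return (w * depths).sum(axis=0)                  # attention-weighted depth

X = np.random.default_rng(0).normal(size=(100, 3))
iso = IsolationForest(random_state=0).fit(X)
print(attention_path_score(iso, X[:5]))
```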
arXiv Detail & Related papers (2022-10-05T20:58:57Z)
- AGBoost: Attention-based Modification of Gradient Boosting Machine [0.0]
A new attention-based model for the gradient boosting machine (GBM), called AGBoost, is proposed for solving regression problems.
The main idea behind the proposed AGBoost model is to assign attention weights with trainable parameters to iterations of GBM.
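One plausible reading of this, as a sketch (not the authors' AGBoost; the softmax over `logits` stands in for the trainable iteration weights):

```python
# Sketch only: re-combine a fitted GBM's staged predictions with softmax
# attention weights over iterations instead of keeping the last stage.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def agboost_predict(gbm, X, logits):
    staged = np.stack(list(gbm.staged_predict(X)))   # (n_iter, n_samples)
    w = np.exp(logits - logits.max()); w /= w.sum()  # weights over iterations
    return w @ staged

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)); y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)
gbm = GradientBoostingRegressor(n_estimators=30, random_state=0).fit(X, y)
print(agboost_predict(gbm, X[:3], logits=np.zeros(30)))
```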
arXiv Detail & Related papers (2022-07-12T17:42:20Z)
- Attention and Self-Attention in Random Forests [5.482532589225552]
New models of random forests jointly using the attention and self-attention mechanisms are proposed.
The self-attention aims to capture dependencies of the tree predictions and to remove noise or anomalous predictions in the random forest.
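A minimal sketch of that denoising effect (an assumption-laden toy, not the paper's model; here queries, keys, and values are all the raw tree predictions and `tau` is illustrative):

```python
# Sketch only: each tree's vote attends to the other trees' votes, pulling
# outlying predictions toward the consensus before averaging.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def self_attention_rf(forest, X, tau=1.0):
    preds = np.stack([t.predict(X) for t in forest.estimators_])  # (T, n)
    sim = -(preds[:, None, :] - preds[None, :, :]) ** 2 / tau     # (T, T, n)
    att = np.exp(sim - sim.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)
    smoothed = (att * preds[None, :, :]).sum(axis=1)              # (T, n)
    return smoothed.mean(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4)); y = X[:, 1] + 0.1 * rng.normal(size=150)
rf = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)
print(self_attention_rf(rf, X[:4]))
```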
arXiv Detail & Related papers (2022-07-09T16:15:53Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Estimation of Switched Markov Polynomial NARX models [75.91002178647165]
We identify a class of models for hybrid dynamical systems characterized by nonlinear autoregressive exogenous (NARX) components.
The proposed approach is demonstrated on an SMNARX problem composed of three nonlinear sub-models with specific regressors.
arXiv Detail & Related papers (2020-09-29T15:00:47Z)
- Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Expert concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z)
- Learning Noise-Aware Encoder-Decoder from Noisy Labels by Alternating Back-Propagation for Saliency Detection [54.98042023365694]
We propose a noise-aware encoder-decoder framework to disentangle a clean saliency predictor from noisy training examples.
The proposed model consists of two sub-models parameterized by neural networks.
arXiv Detail & Related papers (2020-07-23T18:47:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.