Reduced Robust Random Cut Forest for Out-Of-Distribution detection in machine learning models
- URL: http://arxiv.org/abs/2206.09247v1
- Date: Sat, 18 Jun 2022 17:01:40 GMT
- Title: Reduced Robust Random Cut Forest for Out-Of-Distribution detection in machine learning models
- Authors: Harsh Vardhan, Janos Sztipanovits
- Abstract summary: Most machine learning-based regressors extract information from data collected via past observations of limited length to make predictions in the future.
When the input to these trained models has statistical properties significantly different from the training data, there is no guarantee of accurate prediction.
We introduce a novel approach for this detection process using a Reduced Robust Random Cut Forest data structure.
- Score: 0.799536002595393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most machine learning-based regressors extract information from data
collected via past observations of limited length to make predictions in the
future. Consequently, when the input to these trained models has statistical
properties significantly different from the training data, there is no
guarantee of accurate prediction. Using these models on out-of-distribution
input data may therefore produce a predicted outcome completely different from
the desired one, which is not only erroneous but can also be hazardous in some
cases. Successful deployment of these machine learning models in any system
requires a detection system that can distinguish between out-of-distribution
and in-distribution data (i.e. data similar to the training data). In this
paper, we introduce a novel approach for this detection process using a
Reduced Robust Random Cut Forest (RRRCF) data structure, which can be used on
both small and large data sets. Like the Robust Random Cut Forest (RRCF),
RRRCF is a structured but reduced representation of the training-data subspace
in the form of cut trees. Empirical results of this method on both low- and
high-dimensional data show that inference about whether data lies in or out of
the training distribution can be made efficiently, and that the model is easy
to train, with no difficult hyper-parameter tuning. The paper discusses two
different use cases for testing and validating the results.
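As a rough illustration of the underlying idea (not the paper's RRRCF implementation; all function names here are hypothetical), the sketch below builds a small forest of random cut trees over the training data, choosing each cut dimension with probability proportional to its bounding-box span as in RRCF, and flags a query point as likely out-of-distribution when it is isolated at shallow depth:

```python
import random

def build_tree(points, depth=0, max_depth=12):
    """Recursively partition points with random axis-aligned cuts."""
    if len(points) <= 1 or depth >= max_depth:
        return {"leaf": True}
    dims = len(points[0])
    spans = [max(p[d] for p in points) - min(p[d] for p in points)
             for d in range(dims)]
    total = sum(spans)
    if total == 0:  # all points identical: stop splitting
        return {"leaf": True}
    # Pick the cut dimension with probability proportional to its span (as in RRCF)
    r, dim, acc = random.uniform(0, total), 0, spans[0]
    while acc < r:
        dim += 1
        acc += spans[dim]
    lo = min(p[dim] for p in points)
    cut = lo + random.uniform(0, spans[dim])
    left = [p for p in points if p[dim] <= cut]
    right = [p for p in points if p[dim] > cut]
    if not left or not right:  # degenerate cut: stop splitting
        return {"leaf": True}
    return {"leaf": False, "dim": dim, "cut": cut,
            "left": build_tree(left, depth + 1, max_depth),
            "right": build_tree(right, depth + 1, max_depth)}

def isolation_depth(train, query, n_trees=25):
    """Average depth at which `query` is isolated across a forest built on
    the training points plus the query; shallow depth suggests OOD."""
    depths = []
    for _ in range(n_trees):
        node, d = build_tree(train + [query]), 0
        while not node["leaf"]:
            node = node["left"] if query[node["dim"]] <= node["cut"] else node["right"]
            d += 1
        depths.append(d)
    return sum(depths) / n_trees
```

In-distribution points require many cuts before they are separated from their neighbors, while a far out-of-distribution point is typically split off by the very first cuts, so its average isolation depth is much smaller.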
Related papers
- Usage-Specific Survival Modeling Based on Operational Data and Neural Networks [0.3999851878220878]
The presented methodology is based on neural network-based survival models that are trained using data that is continuously gathered and stored at specific times, called snapshots.
The paper shows that if the data is in a specific format, where all snapshot times are the same for all individuals, maximum likelihood training can be applied and produces desirable results.
To reduce the number of samples needed during training, the paper also proposes randomly resampling the dataset at the start of each epoch, instead of resampling it once before training starts.
arXiv Detail & Related papers (2024-03-27T16:32:32Z)
- Learning Defect Prediction from Unrealistic Data [57.53586547895278]
Pretrained models of code have become popular choices for code understanding and generation tasks.
Such models tend to be large and require commensurate volumes of training data.
It has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs.
Models trained on such data tend to only perform well on similar data, while underperforming on real world programs.
arXiv Detail & Related papers (2023-11-02T01:51:43Z)
- An unfolding method based on conditional Invertible Neural Networks (cINN) using iterative training [0.0]
Generative networks like invertible neural networks (INN) enable a probabilistic unfolding.
We introduce the iterative conditional INN (IcINN) for unfolding, which adjusts for deviations between simulated training samples and data.
arXiv Detail & Related papers (2022-12-16T19:00:05Z)
- Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data only by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performances comparable to those of a logistic model trained with the full unaggregated data.
arXiv Detail & Related papers (2022-10-05T09:17:27Z)
- PROMISSING: Pruning Missing Values in Neural Networks [0.0]
We propose a simple and intuitive yet effective method for pruning missing values (PROMISSING) during learning and inference steps in neural networks.
Our experiments show that PROMISSING results in similar prediction performance compared to various imputation techniques.
arXiv Detail & Related papers (2022-06-03T15:37:27Z)
- Conformal prediction for the design problem [72.14982816083297]
In many real-world deployments of machine learning, we use a prediction algorithm to choose what data to test next.
In such settings, there is a distinct type of distribution shift between the training and test data.
We introduce a method to quantify predictive uncertainty in such settings.
arXiv Detail & Related papers (2022-02-08T02:59:12Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to infer from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Unsupervised Model Drift Estimation with Batch Normalization Statistics for Dataset Shift Detection and Model Selection [0.0]
We propose a novel method of model drift estimation by exploiting statistics of batch normalization layer on unlabeled test data.
We show the effectiveness of our method not only on dataset shift detection but also on model selection when there are multiple candidate models among model zoo or training trajectories in an unsupervised way.
arXiv Detail & Related papers (2021-07-01T03:04:47Z)
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models [0.06372261626436676]
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training of these models with a novel loss function and centroid-updating scheme, matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.