Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution
Detection
- URL: http://arxiv.org/abs/2206.12911v1
- Date: Sun, 26 Jun 2022 16:00:22 GMT
- Title: Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution
Detection
- Authors: Xiongjie Chen, Yunpeng Li, Yongxin Yang
- Abstract summary: Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble stochastic neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset, the FashionMNIST vs MNIST dataset, the FashionMNIST vs NotMNIST dataset, and the CIFAR10 vs SVHN dataset.
- Score: 55.028065567756066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection has recently received much attention from
the machine learning community due to its importance in deploying machine
learning models in real-world applications. In this paper we propose an
uncertainty quantification approach by modelling the distribution of features.
We further incorporate an efficient ensemble mechanism, namely batch-ensemble,
to construct the batch-ensemble stochastic neural networks (BE-SNNs) and
overcome the feature collapse problem. We compare the performance of the
proposed BE-SNNs with the other state-of-the-art approaches and show that
BE-SNNs yield superior performance on several OOD benchmarks, such as the
Two-Moons dataset, the FashionMNIST vs MNIST dataset, FashionMNIST vs NotMNIST
dataset, and the CIFAR10 vs SVHN dataset.
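For intuition, here is a minimal sketch of a batch-ensemble linear layer in the style of Wen et al. (2020), where all ensemble members share one weight matrix and each owns only rank-1 "fast weights"; the class name, initialization, and member indexing below are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BatchEnsembleLinear(nn.Module):
    """Sketch of a batch-ensemble linear layer: member k computes
    ((x * r_k) @ W.T) * s_k + b_k, sharing W across all members."""

    def __init__(self, in_features, out_features, n_members=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.r = nn.Parameter(torch.ones(n_members, in_features))   # input fast weights
        self.s = nn.Parameter(torch.ones(n_members, out_features))  # output fast weights
        self.bias = nn.Parameter(torch.zeros(n_members, out_features))

    def forward(self, x, member):
        # x: (batch, in_features); member: index of the ensemble member
        h = (x * self.r[member]) @ self.weight.t()
        return h * self.s[member] + self.bias[member]

# Usage: average member outputs to obtain the ensemble prediction.
layer = BatchEnsembleLinear(8, 2, n_members=4)
x = torch.randn(16, 8)
preds = torch.stack([layer(x, k) for k in range(4)]).mean(0)
```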
Related papers
- Non-Linear Outlier Synthesis for Out-of-Distribution Detection [5.019613806273252]
We present NCIS, which enhances the quality of synthetic outliers by operating directly in the diffusion model's embedding space.
We demonstrate that these improvements yield new state-of-the-art OOD detection results on standard ImageNet100 and CIFAR100 benchmarks.
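As a rough illustration of embedding-space outlier synthesis, here is a simplified linear variant (in the spirit of earlier Gaussian-based synthesis methods, not NCIS's non-linear approach): fit a Gaussian to in-distribution embeddings and keep the least likely samples as synthetic outliers.

```python
import numpy as np

def synthesize_outliers(embeddings, n_outliers=100, seed=0):
    # Fit a Gaussian to in-distribution embeddings, oversample from it,
    # and keep the samples farthest from the mean (in Mahalanobis
    # distance) as synthetic outliers. All constants are illustrative.
    rng = np.random.default_rng(seed)
    mu = embeddings.mean(axis=0)
    cov = np.cov(embeddings, rowvar=False) + 1e-4 * np.eye(embeddings.shape[1])
    samples = rng.multivariate_normal(mu, cov, size=20 * n_outliers)
    inv = np.linalg.inv(cov)
    d2 = np.einsum('ij,jk,ik->i', samples - mu, inv, samples - mu)
    return samples[np.argsort(d2)[-n_outliers:]]  # least likely samples
```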
arXiv Detail & Related papers (2024-11-20T09:47:29Z)
- Spintronics for image recognition: performance benchmarking via ultrafast data-driven simulations [4.2412715094420665]
We present a demonstration of image classification using an echo-state network (ESN) relying on a single simulated spintronic nanostructure.
We employ an ultrafast data-driven simulation framework, the data-driven Thiele equation approach, to simulate the dynamics of the spin-torque vortex oscillator (STVO).
We showcase the versatility of our solution by successfully applying it to solve classification challenges with the MNIST, EMNIST-letters and Fashion MNIST datasets.
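Since the paper's reservoir is a simulated spintronic oscillator, the closest simple software analogue is a generic echo-state network; the sketch below shows that generic ESN (reservoir size, input scaling, and seed are arbitrary), where only a linear readout over the collected states would be trained.

```python
import numpy as np

def esn_features(inputs, n_reservoir=200, spectral_radius=0.9, seed=0):
    # Generic echo-state network: a fixed random reservoir produces
    # nonlinear temporal features; inputs has shape (time_steps, n_inputs).
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(n_reservoir, inputs.shape[1])) * 0.1
    W = rng.normal(size=(n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state scaling
    state = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        state = np.tanh(W_in @ u + W @ state)
        states.append(state.copy())
    return np.asarray(states)  # train e.g. a ridge-regression readout on these
```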
arXiv Detail & Related papers (2023-08-10T18:09:44Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-explored area.
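The energy score at the core of energy-based OOD detection is easy to state; a minimal sketch follows, using Liu et al.'s standard convention plus a GNNSafe-style neighbor smoothing step whose coefficients are illustrative.

```python
import torch

def energy_score(logits, temperature=1.0):
    # Negative free energy of the classifier's logits; higher values
    # indicate inputs the model finds less familiar (more likely OOD).
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

def propagate_energy(energy, adj_norm, alpha=0.5, steps=2):
    # GNNSafe-style sketch: smooth per-node energies over the graph by
    # mixing each node's score with its neighbors' scores; adj_norm is a
    # row-normalized adjacency matrix, alpha and steps are placeholders.
    for _ in range(steps):
        energy = alpha * energy + (1 - alpha) * adj_norm @ energy
    return energy
```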
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Towards Robust k-Nearest-Neighbor Machine Translation [72.9252395037097]
k-Nearest-Neighbor Machine Translation (kNN-MT) has become an important research direction in NMT in recent years.
Its main idea is to retrieve useful key-value pairs from an additional datastore to modify translations without updating the NMT model.
However, noisy retrieved pairs can dramatically degrade model performance.
We propose a confidence-enhanced kNN-MT model with robust training to alleviate the impact of noise.
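A minimal sketch of the standard kNN-MT combination (Khandelwal et al., 2021) that this robust variant builds on; the temperature and interpolation weight below are illustrative.

```python
import torch

def knn_mt_interpolate(nmt_probs, knn_dists, knn_token_ids,
                       vocab_size, lam=0.5, temperature=10.0):
    # Turn retrieved neighbor distances into a distribution over their
    # target tokens, then interpolate with the NMT model's distribution.
    # knn_dists: (k,) float distances; knn_token_ids: (k,) long tensor.
    weights = torch.softmax(-knn_dists / temperature, dim=-1)
    knn_probs = torch.zeros(vocab_size).scatter_add_(0, knn_token_ids, weights)
    return lam * knn_probs + (1 - lam) * nmt_probs
```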
arXiv Detail & Related papers (2022-10-17T07:43:39Z)
- coVariance Neural Networks [119.45320143101381]
Graph neural networks (GNNs) are an effective framework that exploits inter-relationships within graph-structured data for learning.
We propose a GNN architecture, called coVariance neural network (VNN), that operates on sample covariance matrices as graphs.
We show that VNN performance is indeed more stable than PCA-based statistical approaches.
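A sketch of the core idea, assuming the usual graph-filter form: the sample covariance acts as the graph shift operator, and a small polynomial filter processes the features (the tap values and nonlinearity here are arbitrary).

```python
import numpy as np

def vnn_layer(X, C, taps):
    # Covariance-filter sketch: apply the polynomial filter
    # sum_k taps[k] * X @ C^k, treating the covariance C as a graph
    # shift operator acting on the feature dimension of X.
    out = np.zeros_like(X)
    shifted = X
    for h in taps:
        out = out + h * shifted
        shifted = shifted @ C  # one more application of the shift operator
    return np.tanh(out)

X = np.random.randn(100, 16)          # (n_samples, n_features)
C = np.cov(X, rowvar=False)           # (16, 16) sample covariance
Z = vnn_layer(X, C, taps=[0.5, 0.3, 0.2])
```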
arXiv Detail & Related papers (2022-05-31T15:04:43Z)
- Accelerating Multi-Objective Neural Architecture Search by Random-Weight Evaluation [24.44521525130034]
We introduce a new performance estimation metric named Random-Weight Evaluation (RWE) to quantify the quality of CNNs.
RWE only trains the last layer and leaves the remaining layers with random weights, so a single network can be evaluated in seconds.
Our proposed method obtains a set of efficient models with state-of-the-art performance in two real-world search spaces.
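A minimal sketch of the random-weight evaluation idea, assuming a standard train-the-head-only setup; the optimizer, learning rate, and epoch count are placeholders.

```python
import torch
import torch.nn as nn

def random_weight_evaluation(backbone, head, loader, epochs=1):
    # RWE-style sketch: keep the randomly initialized backbone frozen
    # and train only the final linear head; the head's accuracy then
    # serves as a cheap proxy score for the architecture's quality.
    for p in backbone.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(head(backbone(x)), y)
            loss.backward()
            opt.step()
```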
arXiv Detail & Related papers (2021-10-08T06:35:20Z)
- Joint Distribution across Representation Space for Out-of-Distribution Detection [16.96466730536722]
We present a novel generative outlook on in-distribution data, treating the latent features generated at each hidden layer as a joint distribution across representation spaces.
We first construct the Gaussian Mixture Model (GMM) based on in-distribution latent features for each hidden layer, and then connect GMMs via the transition probabilities of the inference traces.
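A sketch of the per-layer density part using scikit-learn GMMs; the paper's transition probabilities linking the layers are omitted here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_layer_gmms(layer_features, n_components=5):
    # layer_features: list of (n_samples, dim_l) arrays of in-distribution
    # features, one array per hidden layer; fit one GMM per layer.
    return [GaussianMixture(n_components).fit(f) for f in layer_features]

def layer_log_likelihoods(gmms, test_features):
    # test_features: list of (n_test, dim_l) arrays aligned with the GMMs;
    # returns per-layer log-likelihoods, shape (n_test, n_layers).
    return np.stack([g.score_samples(f)
                     for g, f in zip(gmms, test_features)], axis=1)
```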
arXiv Detail & Related papers (2021-03-23T06:39:29Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embeddings from high-dimensional attributes and local structure.
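An illustrative scoring rule in the spirit of such node-versus-subgraph contrastive schemes (not the paper's exact formulation): a discriminator rates agreement for matched and mismatched pairs, and nodes whose two scores barely differ look anomalous.

```python
import torch

def contrastive_anomaly_score(pos_logits, neg_logits):
    # pos_logits rate each node against its own local subgraph,
    # neg_logits against a randomly sampled one. Normal nodes separate
    # the two cases well (score near -1); anomalies do not (score near
    # 0), so larger values flag anomalies.
    return torch.sigmoid(neg_logits) - torch.sigmoid(pos_logits)
```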
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Ensembles of Spiking Neural Networks [0.3007949058551534]
This paper demonstrates how to construct ensembles of spiking neural networks producing state-of-the-art results.
We achieve classification accuracies of 98.71%, 100.0%, and 99.09% on the MNIST, NMNIST, and DVS Gesture datasets, respectively.
We formalize spiking neural networks as GLM predictors, identifying a suitable representation for their target domain.
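Mechanically, the ensembling step itself is plain averaging; a sketch follows, where the rate-based readout is a simplification of the paper's GLM formulation.

```python
import numpy as np

def ensemble_predict(member_rates):
    # member_rates: (n_members, n_samples, n_classes) output spike rates,
    # one slice per ensemble member; average across members, then pick
    # the class with the highest mean rate.
    return member_rates.mean(axis=0).argmax(axis=-1)
```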
arXiv Detail & Related papers (2020-10-15T17:45:18Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
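As a rough illustration of a prediction-diversity term (a generic penalty, not the paper's adversarial information-bottleneck objective):

```python
import torch

def pairwise_diversity_penalty(probs):
    # probs: (n_members, batch, n_classes) predictive distributions.
    # Penalize the average pairwise inner product between members'
    # distributions, pushing members toward different predictions.
    m = probs.shape[0]
    sims = [(probs[i] * probs[j]).sum(-1).mean()
            for i in range(m) for j in range(i + 1, m)]
    return torch.stack(sims).mean()
```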
arXiv Detail & Related papers (2020-03-10T03:10:41Z)