NADS: Neural Architecture Distribution Search for Uncertainty Awareness
- URL: http://arxiv.org/abs/2006.06646v1
- Date: Thu, 11 Jun 2020 17:39:07 GMT
- Title: NADS: Neural Architecture Distribution Search for Uncertainty Awareness
- Authors: Randy Ardywibowo, Shahin Boluki, Xinyu Gong, Zhangyang Wang, Xiaoning Qian
- Abstract summary: Machine learning (ML) systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a distribution different from training data.
Existing OoD detection approaches are prone to errors and even sometimes assign higher likelihoods to OoD samples.
We propose Neural Architecture Distribution Search (NADS) to identify common building blocks among all uncertainty-aware architectures.
- Score: 79.18710225716791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) systems often encounter Out-of-Distribution (OoD)
errors when dealing with testing data coming from a distribution different from
training data. It becomes important for ML systems in critical applications to
accurately quantify their predictive uncertainty and screen out these anomalous
inputs. However, existing OoD detection approaches are prone to errors and even
sometimes assign higher likelihoods to OoD samples. Unlike standard learning
tasks, there is currently no well-established guiding principle for designing
OoD detection architectures that can accurately quantify uncertainty. To
address these problems, we first seek to identify guiding principles for
designing uncertainty-aware architectures, by proposing Neural Architecture
Distribution Search (NADS). NADS searches for a distribution of architectures
that perform well on a given task, allowing us to identify common building
blocks among all uncertainty-aware architectures. With this formulation, we are
able to optimize a stochastic OoD detection objective and construct an ensemble
of models to perform OoD detection. We perform multiple OoD detection
experiments and observe that our NADS performs favorably, with up to 57%
improvement in accuracy compared to state-of-the-art methods among 15 different
testing configurations.
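The ensemble formulation above can be made concrete with a small sketch. A common way to turn an ensemble of density models into an OoD score is a WAIC-style criterion that rewards high average likelihood but penalizes disagreement between ensemble members; the code below is an illustrative sketch of that idea, not the paper's exact objective, and the function name and toy numbers are assumptions for demonstration.

```python
import numpy as np

def waic_ood_score(log_likes: np.ndarray) -> np.ndarray:
    """WAIC-style OoD score from an ensemble of density models.

    log_likes: shape (n_models, n_inputs), where entry (m, i) is
    log p_m(x_i) under ensemble member m. Higher score suggests
    in-distribution; low scores flag candidate OoD inputs.
    """
    mean_ll = log_likes.mean(axis=0)   # E_theta[log p(x)]
    var_ll = log_likes.var(axis=0)     # Var_theta[log p(x)]
    return mean_ll - var_ll            # penalize ensemble disagreement

# Toy usage: three "models" agree on input 0, disagree on input 1.
lls = np.array([[-1.0, -1.0],
                [-1.1, -5.0],
                [-0.9, -9.0]])
scores = waic_ood_score(lls)
# scores[0] > scores[1]: the disputed input looks more OoD.
```

The variance penalty is what makes the ensemble matter: a single model assigning high likelihood to an OoD sample is overruled when other sampled architectures disagree.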
Related papers
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- Quantifying uncertainty for deep learning based forecasting and flow-reconstruction using neural architecture search ensembles [0.8258451067861933]
We present an automated approach to deep neural network (DNN) discovery and demonstrate how this may also be utilized for ensemble-based uncertainty quantification.
We highlight how the proposed method not only discovers high-performing neural network ensembles for our tasks, but also quantifies uncertainty seamlessly.
We demonstrate the feasibility of this framework for two tasks - forecasting from historical data and flow reconstruction from sparse sensors for the sea-surface temperature.
arXiv Detail & Related papers (2023-02-20T03:57:06Z)
- Diffusion Denoising Process for Perceptron Bias in Out-of-distribution Detection [67.49587673594276]
We introduce a new perceptron bias assumption that suggests discriminator models are more sensitive to certain features of the input, leading to the overconfidence problem.
We demonstrate that the diffusion denoising process (DDP) of diffusion models (DMs) serves as a novel form of asymmetric interpolation, which is well-suited to enhance the input and mitigate the overconfidence problem.
Our experiments on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA approaches.
arXiv Detail & Related papers (2022-11-21T08:45:08Z)
- Validation Diagnostics for SBI algorithms based on Normalizing Flows [55.41644538483948]
This work proposes easy to interpret validation diagnostics for multi-dimensional conditional (posterior) density estimators based on NF.
It also offers theoretical guarantees based on results of local consistency.
This work should help the design of better specified models or drive the development of novel SBI-algorithms.
arXiv Detail & Related papers (2022-11-17T15:48:06Z)
- A Review of Uncertainty Calibration in Pretrained Object Detectors [5.440028715314566]
We investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting.
We propose a framework to ensure a fair, unbiased, and repeatable evaluation.
We deliver novel insights into why poor detector calibration emerges.
arXiv Detail & Related papers (2022-10-06T14:06:36Z)
- iDECODe: In-distribution Equivariance for Conformal Out-of-distribution Detection [24.518698391381204]
Machine learning methods such as deep neural networks (DNNs) often generate incorrect predictions with high confidence.
We propose the new method iDECODe, leveraging in-distribution equivariance for conformal OOD detection.
We demonstrate the efficacy of iDECODe by experiments on image and audio datasets, obtaining state-of-the-art results.
arXiv Detail & Related papers (2022-01-07T05:21:40Z)
- Detecting OODs as datapoints with High Uncertainty [12.040347694782007]
Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution inputs (OODs).
This limitation is one of the key challenges in the adoption of DNNs in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis.
Several techniques have been developed to detect inputs where the model's prediction cannot be trusted.
We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach for detection of OODs as datapoints with high uncertainty (epistemic or aleatoric).
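The epistemic/aleatoric split referenced here has a standard information-theoretic form for ensembles: total predictive entropy decomposes into expected per-model entropy (aleatoric) plus the mutual information between prediction and model (epistemic). The sketch below illustrates that decomposition; it is a generic construction, not this paper's specific ensemble, and the function name is an assumption.

```python
import numpy as np

def uncertainty_decomposition(probs: np.ndarray):
    """Split ensemble predictive uncertainty for one input.

    probs: shape (n_models, n_classes), softmax outputs per model.
    Returns (total, aleatoric, epistemic) in nats, where
    total = aleatoric + epistemic.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    total = -np.sum(mean_p * np.log(mean_p + eps))      # predictive entropy
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    epistemic = total - aleatoric                       # mutual information
    return total, aleatoric, epistemic

# Confident disagreement between models -> high epistemic uncertainty.
disagree = np.array([[0.99, 0.01], [0.01, 0.99]])
# Agreement on an uncertain prediction -> aleatoric, not epistemic.
agree = np.array([[0.5, 0.5], [0.5, 0.5]])
_, _, epi_d = uncertainty_decomposition(disagree)
_, _, epi_a = uncertainty_decomposition(agree)
```

Flagging inputs whose epistemic term is large targets exactly the OoD failure mode described above: the models are individually confident but collectively inconsistent.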
arXiv Detail & Related papers (2021-08-13T20:07:42Z)
- Sketching Curvature for Efficient Out-of-Distribution Detection for Deep Neural Networks [32.629801680158685]
Sketching Curvature of OoD Detection (SCOD) is an architecture-agnostic framework for equipping trained Deep Neural Networks with task-relevant uncertainty estimates.
We demonstrate that SCOD achieves comparable or better OoD detection performance with lower computational burden relative to existing baselines.
arXiv Detail & Related papers (2021-02-24T21:34:40Z)
- SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection [63.253850875265115]
Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples.
We propose a modular acceleration system, called SUOD, to address it.
arXiv Detail & Related papers (2020-03-11T00:22:50Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training of these models with a novel loss function and centroid-updating scheme, and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
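The single-forward-pass rejection idea in the last entry can be sketched with a distance-based certainty measure: score an input by an RBF kernel between its features and learned class centroids, and reject when even the best class is far away. This is an illustrative sketch of the general technique, not that paper's exact model; the function name, shapes, and toy values are assumptions.

```python
import numpy as np

def rbf_certainty(features: np.ndarray, centroids: np.ndarray,
                  sigma: float = 1.0) -> np.ndarray:
    """Per-class certainty as an RBF kernel to class centroids.

    features: shape (d,); centroids: shape (n_classes, d).
    Returns values in (0, 1]; if the max falls below a threshold,
    the input is rejected as out of distribution.
    """
    sq_dist = np.sum((centroids - features) ** 2, axis=1)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

centroids = np.array([[0.0, 0.0], [4.0, 4.0]])
near = rbf_certainty(np.array([0.1, 0.0]), centroids)  # close to class 0
far = rbf_certainty(np.array([10.0, 10.0]), centroids)  # far from both
# near.max() is high; far.max() is near zero -> flag as OoD.
```

Because the score is a single kernel evaluation against stored centroids, detection costs one forward pass, which is the appeal of deterministic approaches over sampling-based ensembles.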
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.