Scene Uncertainty and the Wellington Posterior of Deterministic Image
Classifiers
- URL: http://arxiv.org/abs/2106.13870v2
- Date: Fri, 24 Mar 2023 23:36:38 GMT
- Title: Scene Uncertainty and the Wellington Posterior of Deterministic Image
Classifiers
- Authors: Stephanie Tsuei, Aditya Golatkar, Stefano Soatto
- Abstract summary: We introduce the Wellington Posterior, which is the distribution of outcomes that would have been obtained in response to data that could have been generated by the same scene.
We explore the use of data augmentation, dropout, ensembling, single-view reconstruction, and model linearization to compute a Wellington Posterior.
Additional methods include the use of conditional generative models such as generative adversarial networks, neural radiance fields, and conditional prior networks.
- Score: 68.9065881270224
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method to estimate the uncertainty of the outcome of an image
classifier on a given input datum. Deep neural networks commonly used for image
classification are deterministic maps from an input image to an output class.
As such, their outcome on a given datum involves no uncertainty, so we must
specify what variability we are referring to when defining, measuring and
interpreting uncertainty, and attributing "confidence" to the outcome. To this
end, we introduce the Wellington Posterior, which is the distribution of
outcomes that would have been obtained in response to data that could have been
generated by the same scene that produced the given image. Since there are
infinitely many scenes that could have generated any given image, the
Wellington Posterior involves inductive transfer from scenes other than the one
portrayed. We explore the use of data augmentation, dropout, ensembling,
single-view reconstruction, and model linearization to compute a Wellington
Posterior. Additional methods include the use of conditional generative models
such as generative adversarial networks, neural radiance fields, and
conditional prior networks. We test these methods against the empirical
posterior obtained by performing inference on multiple images of the same
underlying scene. These developments are only a small step towards assessing
the reliability of deep network classifiers in a manner that is compatible with
safety-critical applications and human interpretation.
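The data-augmentation route to a Wellington Posterior admits a compact Monte-Carlo sketch: perturb the given image with transformations that plausibly preserve the underlying scene, run the deterministic classifier on each perturbed copy, and aggregate the resulting class probabilities. The sketch below is illustrative only; the choice of torchvision transforms, the sample count, and the averaging of softmax outputs are assumptions, not the authors' exact protocol.

```python
# Minimal sketch: approximating a Wellington Posterior by Monte Carlo data
# augmentation. The transforms are illustrative stand-ins for scene-preserving
# nuisance variability (viewpoint, illumination), not the paper's protocol.
import torch
import torchvision.transforms as T

def wellington_posterior(classifier, image, n_samples=64):
    """Empirical class distribution over augmented copies of `image`.

    classifier: deterministic network mapping a (N, C, H, W) batch to logits.
    image: a (C, H, W) tensor with values in [0, 1].
    """
    augment = T.Compose([
        T.RandomResizedCrop(size=list(image.shape[-2:]), scale=(0.8, 1.0)),
        T.RandomHorizontalFlip(),
        T.ColorJitter(brightness=0.2, contrast=0.2),
    ])
    classifier.eval()
    with torch.no_grad():
        batch = torch.stack([augment(image) for _ in range(n_samples)])
        probs = torch.softmax(classifier(batch), dim=-1)  # (n_samples, K)
    return probs.mean(dim=0)                              # averaged distribution over K classes
```

The entropy of the returned distribution, or its disagreement with the prediction on the unperturbed image, can then serve as a per-datum uncertainty score; the dropout and ensembling variants mentioned in the abstract differ only in where the randomness is injected.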
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - Data Attribution for Text-to-Image Models by Unlearning Synthesized Images [71.23012718682634]
The goal of data attribution for text-to-image models is to identify the training images that most influence the generation of a new image.
We propose a new approach that efficiently identifies highly-influential images.
arXiv Detail & Related papers (2024-06-13T17:59:44Z) - Generator Born from Classifier [66.56001246096002]
We aim to reconstruct an image generator, without relying on any data samples.
We propose a novel learning paradigm, in which the generator is trained to ensure that the convergence conditions of the network parameters are satisfied.
arXiv Detail & Related papers (2023-12-05T03:41:17Z) - Counterfactual Image Generation for adversarially robust and
interpretable Classifiers [1.3859669037499769]
We propose a unified framework leveraging image-to-image translation Generative Adversarial Networks (GANs) to produce counterfactual samples.
This is achieved by combining the classifier and discriminator into a single model that attributes real images to their respective classes and flags generated images as "fake".
We show how the model exhibits improved robustness to adversarial attacks, and we show how the discriminator's "fakeness" value serves as an uncertainty measure of the predictions.
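The combined classifier-discriminator described above can be read as a single K+1-way head: K logits for the real classes plus one "fake" logit whose probability doubles as the uncertainty signal. The following is a hypothetical sketch under that assumption; the backbone, layer shapes, and readout are illustrative, not the paper's architecture.

```python
# Hypothetical K+1-way classifier-discriminator head: K real classes plus one
# "fake" output whose probability serves as a fakeness / uncertainty score.
import torch
import torch.nn as nn

class ClassifierDiscriminator(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone                           # any feature extractor
        self.head = nn.Linear(feat_dim, num_classes + 1)   # last logit = "fake"

    def forward(self, x):
        probs = torch.softmax(self.head(self.backbone(x)), dim=-1)
        class_probs = probs[:, :-1]   # distribution over the real classes
        fakeness = probs[:, -1]       # doubles as an uncertainty measure
        return class_probs, fakeness
```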
arXiv Detail & Related papers (2023-10-01T18:50:29Z) - Adversarial Sampling for Fairness Testing in Deep Neural Network [0.0]
We use adversarial sampling to test the fairness of a deep neural network's predictions across different classes of images in a given dataset.
We trained our neural network model on the original images only, without training it on the perturbed or attacked images.
When we fed the adversarial samples to our model, it was able to predict the original category/class each adversarial sample belongs to.
arXiv Detail & Related papers (2023-03-06T03:55:37Z) - Traditional Classification Neural Networks are Good Generators: They are
Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes the gradients semantic-aware so as to synthesize plausible images.
We show that our method also extends to text-to-image generation by treating image-text foundation models as classifiers.
arXiv Detail & Related papers (2022-11-27T11:25:35Z) - Robustness and invariance properties of image classifiers [8.970032486260695]
Deep neural networks have achieved impressive results in many image classification tasks.
However, deep networks are not robust to a large variety of semantics-preserving image modifications.
The poor robustness of image classifiers to small data distribution shifts raises serious concerns regarding their trustworthiness.
arXiv Detail & Related papers (2022-08-30T11:00:59Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
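The Nadaraya-Watson estimate referenced above has a short closed form: the class distribution at a query point is a kernel-weighted average of one-hot training labels in some embedding space. The sketch below is a schematic of that estimator only, with a Gaussian kernel and a hand-picked bandwidth as assumptions; it is not the full NUQ method.

```python
# Schematic Nadaraya-Watson estimate of p(y | x): kernel-weighted average of
# one-hot training labels around the query embedding (not the full NUQ method).
import numpy as np

def nadaraya_watson_label_dist(query_emb, train_embs, train_labels,
                               num_classes, bandwidth=1.0):
    """query_emb: (d,); train_embs: (n, d); train_labels: (n,) integer classes."""
    sq_dists = np.sum((train_embs - query_emb) ** 2, axis=1)
    weights = np.exp(-sq_dists / (2.0 * bandwidth ** 2))   # Gaussian kernel
    one_hot = np.eye(num_classes)[train_labels]            # (n, K)
    return weights @ one_hot / (weights.sum() + 1e-12)     # estimated p(y | x)
```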
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Conditional Variational Autoencoder for Learned Image Reconstruction [5.487951901731039]
We develop a novel framework that approximates the posterior distribution of the unknown image at each query observation.
It handles implicit noise models and priors, incorporates the data formation process (i.e., the forward operator), and yields learned reconstructive properties that are transferable between different datasets.
arXiv Detail & Related papers (2021-10-22T10:02:48Z) - Just Noticeable Difference for Machine Perception and Generation of
Regularized Adversarial Images with Minimal Perturbation [8.920717493647121]
We introduce a measure for machine perception inspired by the concept of Just Noticeable Difference (JND) of human perception.
We suggest an adversarial image generation algorithm that iteratively distorts an image with additive noise until the machine learning model detects the change by outputting a false label.
We evaluate the adversarial images generated by our algorithm both qualitatively and quantitatively on CIFAR10, ImageNet, and MS COCO datasets.
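The iterative procedure described above can be sketched as a loop that adds small noise steps until the model's label flips. The Gaussian noise, step size, and iteration cap below are illustrative assumptions rather than the paper's regularized algorithm.

```python
# Minimal sketch of the loop described above: perturb with additive noise until
# the classifier's label changes. Step size, Gaussian noise, and the iteration
# cap are illustrative choices, not the paper's exact algorithm.
import torch

def perturb_until_label_flips(classifier, image, step=0.005, max_steps=1000):
    """Add small noise steps to `image` until the classifier's label changes."""
    classifier.eval()
    with torch.no_grad():
        original = classifier(image.unsqueeze(0)).argmax(dim=-1)
        perturbed = image.clone()
        for _ in range(max_steps):
            perturbed = (perturbed + step * torch.randn_like(perturbed)).clamp(0, 1)
            if classifier(perturbed.unsqueeze(0)).argmax(dim=-1) != original:
                return perturbed   # first image the model labels differently
    return None                    # no label flip within the step budget
```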
arXiv Detail & Related papers (2021-02-16T11:01:55Z)