Toward Reliable Models for Authenticating Multimedia Content: Detecting
Resampling Artifacts With Bayesian Neural Networks
- URL: http://arxiv.org/abs/2007.14132v1
- Date: Tue, 28 Jul 2020 11:23:40 GMT
- Title: Toward Reliable Models for Authenticating Multimedia Content: Detecting
Resampling Artifacts With Bayesian Neural Networks
- Authors: Anatol Maier, Benedikt Lorch, Christian Riess
- Abstract summary: We make a first step toward redesigning forensic algorithms with a strong focus on reliability.
We propose to use Bayesian neural networks (BNN), which combine the power of deep neural networks with the rigorous probabilistic formulation of a Bayesian framework.
The BNN yields state-of-the-art detection performance plus excellent capabilities for detecting out-of-distribution samples.
- Score: 9.857478771881741
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multimedia forensics, learning-based methods provide state-of-the-art
performance in determining origin and authenticity of images and videos.
However, most existing methods are challenged by out-of-distribution data,
i.e., with characteristics that are not covered in the training set. This makes
it difficult to know when to trust a model, particularly for practitioners with
limited technical background.
In this work, we make a first step toward redesigning forensic algorithms
with a strong focus on reliability. To this end, we propose to use Bayesian
neural networks (BNN), which combine the power of deep neural networks with the
rigorous probabilistic formulation of a Bayesian framework. Instead of
providing a point estimate like standard neural networks, BNNs provide
distributions that express both the estimate and also an uncertainty range.
We demonstrate the usefulness of this framework on a classical forensic task:
resampling detection. The BNN yields state-of-the-art detection performance,
plus excellent capabilities for detecting out-of-distribution samples. This is
demonstrated for three pathologic issues in resampling detection, namely unseen
resampling factors, unseen JPEG compression, and unseen resampling algorithms.
We hope that this proposal spurs further research toward reliability in
multimedia forensics.
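The core idea of the abstract, that a BNN returns a predictive distribution rather than a point estimate, can be sketched in a few lines. This is an illustrative toy only, not the paper's model: it assumes a hypothetical diagonal-Gaussian posterior over the weights of a tiny linear detector and forms the Monte Carlo predictive mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian posterior over the weights of a tiny linear
# "resampling detector"; the mean/std values are illustrative only.
w_mean = np.array([0.8, -0.3])
w_std = np.array([0.1, 0.05])

def predict_with_uncertainty(x, n_samples=1000):
    """Monte Carlo predictive distribution: sample weight sets from the
    posterior, run each through the model, and summarize the outputs."""
    # Draw n_samples weight vectors from the diagonal Gaussian posterior.
    w = rng.normal(w_mean, w_std, size=(n_samples, 2))
    logits = w @ x                         # one scalar output per sample
    probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> class probability
    # Point estimate plus an uncertainty range, instead of a bare score.
    return probs.mean(), probs.std()

mean_p, std_p = predict_with_uncertainty(np.array([1.0, 2.0]))
print(f"P(resampled) = {mean_p:.2f} +/- {std_p:.2f}")
```

A large standard deviation on an input would signal that the model is unsure, which is exactly the behavior the paper exploits to flag out-of-distribution samples.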
Related papers
- On the Convergence of Locally Adaptive and Scalable Diffusion-Based Sampling Methods for Deep Bayesian Neural Network Posteriors [2.3265565167163906]
Bayesian neural networks are a promising approach for modeling uncertainties in deep neural networks.
However, generating samples from the posterior distribution of neural networks is a major challenge.
One advance in that direction would be the incorporation of adaptive step sizes into Monte Carlo Markov chain sampling algorithms.
In this paper, we demonstrate that these methods can have a substantial bias in the distribution they sample, even in the limit of vanishing step sizes and at full batch size.
arXiv Detail & Related papers (2024-03-13T15:21:14Z)
- Quantifying uncertainty for deep learning based forecasting and flow-reconstruction using neural architecture search ensembles [0.8258451067861933]
We present an automated approach to deep neural network (DNN) discovery and demonstrate how this may also be utilized for ensemble-based uncertainty quantification.
We highlight how the proposed method not only discovers high-performing neural network ensembles for our tasks, but also quantifies uncertainty seamlessly.
We demonstrate the feasibility of this framework for two tasks - forecasting from historical data and flow reconstruction from sparse sensors for the sea-surface temperature.
arXiv Detail & Related papers (2023-02-20T03:57:06Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- An out-of-distribution discriminator based on Bayesian neural network epistemic uncertainty [0.19573380763700712]
Bayesian neural networks (BNNs) are an important type of neural network with built-in capability for quantifying uncertainty.
This paper discusses aleatoric and epistemic uncertainty in BNNs and how they can be calculated.
arXiv Detail & Related papers (2022-10-18T21:15:33Z)
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called Variational Neural Networks (VNNs).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- An Efficient Confidence Measure-Based Evaluation Metric for Breast Cancer Screening Using Bayesian Neural Networks [3.834509400202395]
We propose a confidence measure-based evaluation metric for breast cancer screening.
We show that our confidence tuning results in increased accuracy with a reduced set of images with high confidence when compared to the baseline transfer learning.
arXiv Detail & Related papers (2020-08-12T20:34:14Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
- Scalable Quantitative Verification For Deep Neural Networks [44.570783946111334]
We propose a test-driven verification framework for deep neural networks (DNNs).
Our technique performs tests until the soundness of a formal probabilistic property can be proven.
Our work paves the way for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees.
arXiv Detail & Related papers (2020-02-17T09:53:50Z)
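To give a feel for the statistical flavor of such test-driven verification (this is a generic illustration, not the cited paper's actual algorithm), Hoeffding's inequality bounds how many random tests suffice to estimate the probability that a property holds to within a tolerance eps at confidence 1 - delta. The `property_holds` predicate below is a hypothetical stand-in for a real DNN property check.

```python
import math
import random

def hoeffding_sample_count(eps, delta):
    """Number of i.i.d. tests so that the empirical pass rate is within
    eps of the true pass probability with confidence at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_property(property_holds, eps=0.05, delta=0.01, seed=0):
    """Monte Carlo estimate of P(property holds) over random inputs."""
    rng = random.Random(seed)
    n = hoeffding_sample_count(eps, delta)
    passes = sum(property_holds(rng.random()) for _ in range(n))
    return passes / n

# Toy "property": a stand-in check that holds on ~90% of random inputs.
rate = estimate_property(lambda x: x < 0.9)
print(f"estimated pass probability ~ {rate:.2f}")
```

With eps = 0.05 and delta = 0.01, the bound asks for 1060 tests; real verification frameworks sharpen this with sequential and adaptive testing rather than a fixed a priori sample count.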
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.