Investigating the Adversarial Robustness of Density Estimation Using the Probability Flow ODE
- URL: http://arxiv.org/abs/2310.07084v1
- Date: Tue, 10 Oct 2023 23:58:53 GMT
- Title: Investigating the Adversarial Robustness of Density Estimation Using the Probability Flow ODE
- Authors: Marius Arvinte, Cory Cornelius, Jason Martin, Nageen Himayat
- Abstract summary: We introduce and evaluate six gradient-based log-likelihood attacks, including a novel reverse integration attack.
Our experimental evaluations on CIFAR-10 show that density estimation using the PF ODE is robust against high-complexity, high-likelihood attacks, and that in some cases adversarial samples are semantically meaningful, as expected from a robust estimator.
- Score: 4.7818621660181595
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Beyond their impressive sampling capabilities, score-based diffusion models
offer a powerful analysis tool in the form of unbiased density estimation of a
query sample under the training data distribution. In this work, we investigate
the robustness of density estimation using the probability flow (PF) neural
ordinary differential equation (ODE) model against gradient-based likelihood
maximization attacks and the relation to sample complexity, where the
compressed size of a sample is used as a measure of its complexity. We
introduce and evaluate six gradient-based log-likelihood maximization attacks,
including a novel reverse integration attack. Our experimental evaluations on
CIFAR-10 show that density estimation using the PF ODE is robust against
high-complexity, high-likelihood attacks, and that in some cases adversarial
samples are semantically meaningful, as expected from a robust estimator.
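For intuition, here is a minimal sketch of the kind of gradient-based log-likelihood maximization attack described above: a PGD-style ascent on the model's log-likelihood inside an L-infinity ball around a clean image. The `log_likelihood_fn` callable is a hypothetical placeholder standing in for a differentiable PF ODE log-likelihood computation (e.g. backpropagating through the ODE solver); this is not the authors' implementation, and the six attacks in the paper, including the reverse integration attack, differ precisely in how that gradient signal is obtained.
```python
import torch

def likelihood_ascent_attack(x, log_likelihood_fn, eps=8 / 255, step=1 / 255, n_steps=20):
    """PGD-style ascent on a model log-likelihood inside an L-infinity ball.

    `log_likelihood_fn` is an assumed differentiable callable returning the
    PF ODE log-likelihood of a batch of images; it is not provided here.
    """
    x_adv = x.clone().detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        ll = log_likelihood_fn(x_adv).sum()        # total log-likelihood of the batch
        (grad,) = torch.autograd.grad(ll, x_adv)   # gradient w.r.t. the input pixels
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()            # ascend the likelihood
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # project to the L-inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                 # keep a valid pixel range
        x_adv = x_adv.detach()
    return x_adv
```
In a setup like this, every likelihood evaluation requires integrating the PF ODE, so the per-step cost of the attack is dominated by the ODE solve and its backward pass.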
Related papers
- A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models [6.647819824559201]
We study the large-sample properties of a likelihood-based approach for estimating conditional deep generative models.
Our results lead to the convergence rate of a sieve maximum likelihood estimator for estimating the conditional distribution.
arXiv Detail & Related papers (2024-10-02T20:46:21Z)
- Model Free Prediction with Uncertainty Assessment [7.524024486998338]
We propose a novel framework that transforms the deep estimation paradigm into a platform conducive to conditional mean estimation.
We develop an end-to-end convergence rate for the conditional diffusion model and establish the normality of the generated samples.
Through numerical experiments, we empirically validate the efficacy of our proposed methodology.
arXiv Detail & Related papers (2024-05-21T11:19:50Z)
- Analysis of learning a flow-based generative model from limited sample complexity [39.771578460963774]
We study the problem of training a flow-based generative model, parametrized by a two-layer autoencoder, to sample from a high-dimensional Gaussian mixture.
arXiv Detail & Related papers (2023-10-05T14:53:40Z)
- Anomaly Detection with Variance Stabilized Density Estimation [49.46356430493534]
We present a variance-stabilized density estimation problem for maximizing the likelihood of the observed samples.
To obtain a reliable anomaly detector, we introduce a spectral ensemble of autoregressive models for learning the variance-stabilized distribution.
We have conducted an extensive benchmark with 52 datasets, demonstrating that our method leads to state-of-the-art results.
arXiv Detail & Related papers (2023-06-01T11:52:58Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- DensePure: Understanding Diffusion Models towards Adversarial Robustness [110.84015494617528]
We analyze the properties of diffusion models and establish the conditions under which they can enhance certified robustness.
We propose a new method, DensePure, designed to improve the certified robustness of a pretrained model (i.e., a classifier).
We show that this robust region is a union of multiple convex sets, and is potentially much larger than the robust regions identified in previous works.
arXiv Detail & Related papers (2022-11-01T08:18:07Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
- Conditional Density Estimation via Weighted Logistic Regressions [0.30458514384586394]
We propose a novel parametric conditional density estimation method by showing the connection between the general density and the likelihood function of inhomogeneous process models.
The maximum likelihood estimates can be obtained via weighted logistic regressions, and the computation can be significantly relaxed by combining a block-wise alternating scheme and local case-control sampling.
arXiv Detail & Related papers (2020-10-21T11:08:25Z)
- Improving Maximum Likelihood Training for Text Generation with Density Ratio Estimation [51.091890311312085]
We propose a new training scheme for auto-regressive sequence generative models, which is effective and stable when operating on the large sample spaces encountered in text generation.
Our method stably outperforms Maximum Likelihood Estimation and other state-of-the-art sequence generative models in terms of both quality and diversity.
arXiv Detail & Related papers (2020-07-12T15:31:24Z)
- TraDE: Transformers for Density Estimation [101.20137732920718]
TraDE is a self-attention-based architecture for auto-regressive density estimation.
We present a suite of tasks such as regression using generated samples, out-of-distribution detection, and robustness to noise in the training data.
arXiv Detail & Related papers (2020-04-06T07:32:51Z)