Auto-Encoding Goodness of Fit
- URL: http://arxiv.org/abs/2210.06546v1
- Date: Wed, 12 Oct 2022 19:21:57 GMT
- Title: Auto-Encoding Goodness of Fit
- Authors: Aaron Palmer, Zhiyi Chi, Derek Aguiar, Jinbo Bi
- Abstract summary: We develop the Goodness of Fit Autoencoder (GoFAE), which incorporates hypothesis tests at two levels.
GoFAE achieves comparable FID scores and mean squared errors with competing deep generative models.
- Score: 11.543670549371361
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For generative autoencoders to learn a meaningful latent representation for
data generation, a careful balance must be achieved between reconstruction
error and how close the distribution in the latent space is to the prior.
However, this balance is challenging to achieve due to a lack of criteria that
work both at the mini-batch (local) and aggregated posterior (global) level.
Goodness of fit (GoF) hypothesis tests provide a measure of statistical
indistinguishability between the latent distribution and a target distribution
class. In this work, we develop the Goodness of Fit Autoencoder (GoFAE), which
incorporates hypothesis tests at two levels. At the mini-batch level, it uses
GoF test statistics as regularization objectives. At a more global level, it
selects a regularization coefficient based on higher criticism, i.e., a test on
the uniformity of the local GoF p-values. We justify the use of GoF tests by
providing a relaxed $L_2$-Wasserstein bound on the distance between the latent
distribution and target prior. We propose to use GoF tests and prove that
optimization based on these tests can be done with stochastic gradient descent
(SGD) on a compact Riemannian manifold. Empirically, we show that our higher
criticism parameter selection procedure balances reconstruction and generation
using mutual information and uniformity of p-values respectively. Finally, we
show that GoFAE achieves comparable FID scores and mean squared errors with
competing deep generative models while retaining statistical
indistinguishability from Gaussian in the latent space based on a variety of
hypothesis tests.
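To make the two levels concrete, a minimal sketch follows: a Shapiro-Wilk statistic on a random one-dimensional projection stands in for the mini-batch GoF regularizer, and a Kolmogorov-Smirnov test of the collected p-values against Uniform(0, 1) stands in for the higher-criticism uniformity check. The specific tests, function names, and the random-projection simplification are assumptions for illustration, not the paper's exact construction.
```python
import numpy as np
from scipy import stats

def minibatch_gof_statistic(z, rng):
    """GoF (normality) statistic and p-value for one mini-batch of latent codes.

    The multivariate codes are reduced to one dimension by a random projection
    before applying Shapiro-Wilk; this is an illustrative simplification.
    """
    direction = rng.standard_normal(z.shape[1])
    direction /= np.linalg.norm(direction)
    return stats.shapiro(z @ direction)

def pvalue_uniformity(p_values):
    """Kolmogorov-Smirnov test of collected GoF p-values against Uniform(0, 1).

    Stands in for the higher-criticism check of whether the local p-values are
    uniform, the criterion used to select the regularization coefficient.
    """
    return stats.kstest(p_values, "uniform")

# Toy check: latent codes drawn from the Gaussian prior should pass both levels.
rng = np.random.default_rng(0)
p_vals = np.array([minibatch_gof_statistic(rng.standard_normal((64, 8)), rng).pvalue
                   for _ in range(200)])           # 200 mini-batches of 64 codes, dim 8
ks_stat, ks_p = pvalue_uniformity(p_vals)
print(f"uniformity of mini-batch p-values: KS stat={ks_stat:.3f}, p={ks_p:.3f}")
```
In training, the mini-batch statistic would enter the objective as a regularizer on the encoder, and the uniformity check would be run across many mini-batches to choose the regularization coefficient; the sketch only shows how the two quantities are computed.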
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Federated Nonparametric Hypothesis Testing with Differential Privacy Constraints: Optimal Rates and Adaptive Tests [5.3595271893779906]
Federated learning has attracted significant recent attention due to its applicability across a wide range of settings where data is collected and analyzed across disparate locations.
We study federated nonparametric goodness-of-fit testing in the white-noise-with-drift model under distributed differential privacy (DP) constraints.
arXiv Detail & Related papers (2024-06-10T19:25:19Z) - AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation [64.9230895853942]
Domain generalization can be arbitrarily hard without exploiting target domain information.
Test-time adaptive (TTA) methods are proposed to address this issue.
In this work, we adopt a Non-Parametric Classifier to perform test-time Adaptation (AdaNPC).
arXiv Detail & Related papers (2023-04-25T04:23:13Z) - Adaptive Conformal Prediction by Reweighting Nonconformity Score [0.0]
We use a Quantile Regression Forest (QRF) to learn the distribution of nonconformity scores and utilize the QRF's weights to assign more importance to samples with residuals similar to the test point.
Our approach enjoys an assumption-free finite sample marginal and training-conditional coverage, and under suitable assumptions, it also ensures conditional coverage.
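As a minimal illustration of how such weights enter a split-conformal procedure, the sketch below computes a weighted quantile of calibration nonconformity scores; the weights themselves are an input here (in the paper they come from the QRF), and the function name is hypothetical.
```python
import numpy as np

def weighted_conformal_quantile(cal_scores, weights, alpha=0.1):
    """Weighted (1 - alpha) empirical quantile of calibration nonconformity scores.

    `weights` plays the role of the similarity weights described in the abstract;
    how they are obtained from the QRF is the paper's contribution and is not
    reproduced here.
    """
    order = np.argsort(cal_scores)
    sorted_scores = cal_scores[order]
    cdf = np.cumsum(weights[order]) / weights.sum()
    idx = min(np.searchsorted(cdf, 1 - alpha), len(sorted_scores) - 1)
    return sorted_scores[idx]

# With uniform weights this reduces to the usual split-conformal quantile.
rng = np.random.default_rng(1)
scores = np.abs(rng.standard_normal(500))          # e.g., absolute residuals
q = weighted_conformal_quantile(scores, np.ones(500), alpha=0.1)
print(f"prediction interval half-width: {q:.3f}")
```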
arXiv Detail & Related papers (2023-03-22T16:42:19Z) - Consistent Diffusion Models: Mitigating Sampling Drift by Learning to be Consistent [97.64313409741614]
We propose to enforce a consistency property, which states that predictions of the model on its own generated data are consistent across time.
We show that our novel training objective yields state-of-the-art results for conditional and unconditional generation on CIFAR-10 and baseline improvements on AFHQ and FFHQ.
arXiv Detail & Related papers (2023-02-17T18:45:04Z) - Functional Linear Regression of Cumulative Distribution Functions [20.96177061945288]
We propose functional ridge-regression-based estimation methods that estimate CDFs accurately everywhere.
We show estimation error upper bounds of $\widetilde{O}(\sqrt{d/n})$ for fixed design, random design, and adversarial context cases.
We formalize infinite dimensional models where the parameter space is an infinite dimensional Hilbert space, and establish a self-normalized estimation error upper bound for this setting.
arXiv Detail & Related papers (2022-05-28T23:59:50Z) - Kernel Robust Hypothesis Testing [20.78285964841612]
In this paper, uncertainty sets are constructed in a data-driven manner using kernel methods.
The goal is to design a test that performs well under the worst-case distributions over the uncertainty sets.
For the Neyman-Pearson setting, the goal is to minimize the worst-case probability of miss detection subject to a constraint on the worst-case probability of false alarm.
arXiv Detail & Related papers (2022-03-23T23:59:03Z) - On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds for RF regression under both constant and adaptive step-size SGD settings.
We observe the double descent phenomenon both theoretically and empirically.
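For orientation, a generic random-features-plus-SGD setup of the kind analyzed in the paper can be sketched as follows; the feature map, step size, and dimensions are arbitrary choices for illustration, not the authors' experimental configuration.
```python
import numpy as np

def random_features(X, W, b):
    """Random Fourier features: phi(x) = sqrt(2/p) * cos(W x + b)."""
    p = W.shape[0]
    return np.sqrt(2.0 / p) * np.cos(X @ W.T + b)

def fit_rf_sgd(X, y, n_features=512, lr=0.5, epochs=100, batch=32, seed=0):
    """Fit a random-features linear model with constant step-size mini-batch SGD."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_features, X.shape[1]))
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    Phi = random_features(X, W, b)
    theta = np.zeros(n_features)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(len(y)), max(1, len(y) // batch)):
            grad = Phi[idx].T @ (Phi[idx] @ theta - y[idx]) / len(idx)
            theta -= lr * grad
    return W, b, theta

# Toy usage on a 1-D regression problem (p > n, i.e., past the interpolation threshold).
rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
W, b, theta = fit_rf_sgd(X, y)
print("train MSE:", np.mean((random_features(X, W, b) @ theta - y) ** 2))
```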
arXiv Detail & Related papers (2021-10-13T17:47:39Z) - Generalizing Variational Autoencoders with Hierarchical Empirical Bayes [6.273154057349038]
We present Hierarchical Empirical Bayes Autoencoder (HEBAE), a computationally stable framework for probabilistic generative models.
Our key contributions are two-fold. First, we make gains by placing a hierarchical prior over the encoding distribution, enabling us to adaptively balance the trade-off between minimizing the reconstruction loss function and avoiding over-regularization.
arXiv Detail & Related papers (2020-07-20T18:18:39Z) - Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z) - Feature Quantization Improves GAN Training [126.02828112121874]
Feature Quantization (FQ) for the discriminator embeds both true and fake data samples into a shared discrete space.
Our method can be easily plugged into existing GAN models, with little computational overhead in training.
arXiv Detail & Related papers (2020-04-05T04:06:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.