Hellinger loss function for Generative Adversarial Networks
- URL: http://arxiv.org/abs/2512.12267v1
- Date: Sat, 13 Dec 2025 10:18:27 GMT
- Title: Hellinger loss function for Generative Adversarial Networks
- Authors: Giovanni Saraceno, Anand N. Vidyashankar, Claudio Agostinelli
- Abstract summary: We propose Hellinger-type loss functions for training Generative Adversarial Networks (GANs). Motivated by the boundedness, symmetry, and robustness properties of the Hellinger distance, we study statistical properties within a general parametric framework. We demonstrate that both proposed losses yield improved estimation accuracy and robustness under increasing levels of data contamination.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose Hellinger-type loss functions for training Generative Adversarial Networks (GANs), motivated by the boundedness, symmetry, and robustness properties of the Hellinger distance. We define an adversarial objective based on this divergence and study its statistical properties within a general parametric framework. We establish the existence, uniqueness, consistency, and joint asymptotic normality of the estimators obtained from the adversarial training procedure. In particular, we analyze the joint estimation of both generator and discriminator parameters, offering a comprehensive asymptotic characterization of the resulting estimators. We introduce two implementations of the Hellinger-type loss and we evaluate their empirical behavior in comparison with the classic (Maximum Likelihood-type) GAN loss. Through a controlled simulation study, we demonstrate that both proposed losses yield improved estimation accuracy and robustness under increasing levels of data contamination.
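The abstract builds its adversarial objective on the Hellinger distance, whose boundedness underlies the claimed robustness. As a minimal illustration only (not the paper's training procedure), the squared Hellinger distance between two discrete distributions can be computed as:

```python
import numpy as np

def hellinger_sq(p, q):
    """Squared Hellinger distance H^2(p, q) = 0.5 * sum((sqrt(p) - sqrt(q))^2).

    With this convention H^2 is bounded in [0, 1]: it is 0 when p == q and
    1 when p and q have disjoint support, which is the boundedness property
    the abstract appeals to.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)
```

In a GAN setting the densities are not available in closed form, so the paper's losses are adversarial surrogates for this divergence; the function above only illustrates the distance itself on known discrete distributions.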
Related papers
- Nonparametric Identification and Inference for Counterfactual Distributions with Confounding [6.997978440999076]
We propose nonparametric identification and semiparametric estimation of joint potential outcomes in the presence of confounding. By bridging classical semiparametric theory with modern representation learning, this work provides a robust statistical foundation for distributional and counterfactual inference in complex causal systems.
arXiv Detail & Related papers (2026-02-17T05:00:13Z)
- Penalized Empirical Likelihood for Doubly Robust Causal Inference under Contamination in High Dimensions [0.720409153108429]
We propose a doubly robust estimator for the average treatment effect in low sample size settings. We show that the proposed confidence interval remains efficient compared to those of competing estimators.
arXiv Detail & Related papers (2025-07-23T11:58:54Z)
- Wasserstein Distributionally Robust Nonparametric Regression [9.65010022854885]
This paper studies the generalization properties of Wasserstein distributionally robust nonparametric estimators. We establish non-asymptotic error bounds for the excess local worst-case risk. The robustness of the proposed estimator is evaluated through simulation studies and illustrated with an application to the MNIST dataset.
arXiv Detail & Related papers (2025-05-12T18:07:37Z)
- Semiparametric conformal prediction [79.6147286161434]
We construct a conformal prediction set accounting for the joint correlation structure of the vector-valued non-conformity scores. We flexibly estimate the joint cumulative distribution function (CDF) of the scores. Our method yields desired coverage and competitive efficiency on a range of real-world regression problems.
arXiv Detail & Related papers (2024-11-04T14:29:02Z)
- Flexible Nonparametric Inference for Causal Effects under the Front-Door Model [2.6900047294457683]
We develop novel one-step and targeted minimum loss-based estimators for both the average treatment effect and the average treatment effect on the treated under front-door assumptions. Our estimators are built on multiple parameterizations of the observed data distribution, including approaches that avoid mediator density entirely. We show how these constraints can be leveraged to improve the efficiency of causal effect estimators.
arXiv Detail & Related papers (2023-12-15T22:04:53Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important for forecasting nonstationary processes or data with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the induced tessellation and approximate the multiple hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Identification and multiply robust estimation in causal mediation analysis across principal strata [7.801213477601286]
We consider assessing causal mediation in the presence of a post-treatment event.
We derive the efficient influence function for each mediation estimand, which motivates a set of multiply robust estimators for inference.
arXiv Detail & Related papers (2023-04-20T00:39:20Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
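This entry interprets GAN losses as a smoothed, generalized Kolmogorov-Smirnov distance. As a small reference point for that claim (an illustrative sketch, not the paper's construction), the classical two-sample KS statistic is the supremum gap between empirical CDFs:

```python
import numpy as np

def ks_distance(x, y):
    """Two-sample Kolmogorov-Smirnov statistic:
    the maximum absolute difference between the empirical CDFs of x and y,
    evaluated at all observed sample points (where the supremum is attained).
    """
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    pts = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, pts, side="right") / len(x)
    cdf_y = np.searchsorted(y, pts, side="right") / len(y)
    return np.max(np.abs(cdf_x - cdf_y))
```

The GAN-based losses in the cited paper replace the hard supremum over indicator-function discriminators with a smooth, trainable discriminator class; the function above only shows the unsmoothed statistic being generalized.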
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
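The GM loss described above combines a classification term over distances to class means with a likelihood regularizer. The following is a hedged numpy sketch of that idea under simplifying assumptions not stated in the summary (identity covariances, no classification margin, and a hypothetical weight `lam` for the regularizer); the actual paper uses trainable class means inside a deep network:

```python
import numpy as np

def gm_loss(features, labels, means, lam=0.1):
    """Illustrative Gaussian-mixture loss (assumption: identity covariances,
    no margin). Cross-entropy over logits given by negative squared distances
    to class means, plus a likelihood term pulling each feature toward its
    own class mean, weighted by the hypothetical hyperparameter lam.
    """
    diff = features[:, None, :] - means[None, :, :]        # (N, K, D)
    sq = np.sum(diff * diff, axis=-1)                      # (N, K) squared distances
    logits = -0.5 * sq                                     # closer mean -> larger logit
    log_probs = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
    idx = np.arange(len(labels))
    ce = -np.mean(log_probs[idx, labels])                  # classification term
    reg = np.mean(0.5 * sq[idx, labels])                   # likelihood regularizer
    return ce + lam * reg
```

With features sitting exactly on their class means the loss is near zero, and mislabeling drives it up, which is the behavior the classification-plus-likelihood design targets.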
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
- Asymptotic Behavior of Adversarial Training in Binary Classification [41.7567932118769]
Adversarial training is considered to be the state-of-the-art method for defense against adversarial attacks.
Despite being successful in practice, several problems in understanding the performance of adversarial training remain open.
We derive precise theoretical predictions for the minimization of adversarial training in binary classification.
arXiv Detail & Related papers (2020-10-26T01:44:20Z)
- Causal Inference of General Treatment Effects using Neural Networks with A Diverging Number of Confounders [12.105996764226227]
Under the unconfoundedness condition, adjustment for confounders requires estimating the nuisance functions relating outcome or treatment to confounders nonparametrically.
This paper considers a generalized optimization framework for efficient estimation of general treatment effects using artificial neural networks (ANNs) to approximate the unknown nuisance function of growing-dimensional confounders.
arXiv Detail & Related papers (2020-09-15T13:07:24Z)
- Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
arXiv Detail & Related papers (2020-05-20T15:01:03Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.