Equalized Generative Treatment: Matching f-divergences for Fairness in Generative Models
- URL: http://arxiv.org/abs/2602.08660v1
- Date: Mon, 09 Feb 2026 13:52:36 GMT
- Title: Equalized Generative Treatment: Matching f-divergences for Fairness in Generative Models
- Authors: Alexandre Verine, Rafael Pinot, Florian Le Bronnec,
- Abstract summary: We introduce a new fairness definition for generative models, termed equalized generative treatment (EGT). EGT requires comparable generation quality across all sensitive groups, with quality measured via a reference f-divergence. We show that min-max methods consistently achieve fairer outcomes compared to other approaches from the literature.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness is a crucial concern for generative models, which not only reflect but can also amplify societal and cultural biases. Existing fairness notions for generative models are largely adapted from classification and focus on balancing the probability of generating samples from each sensitive group. We show that such criteria are brittle, as they can be met even when different sensitive groups are modeled with widely varying quality. To address this limitation, we introduce a new fairness definition for generative models, termed as equalized generative treatment (EGT), which requires comparable generation quality across all sensitive groups, with quality measured via a reference f-divergence. We further analyze the trade-offs induced by EGT, demonstrating that enforcing fairness constraints necessarily couples the overall model quality to that of the most challenging group to approximate. This indicates that a simple yet efficient min-max fine-tuning method should be able to balance f-divergences across sensitive groups to satisfy EGT. We validate this theoretical insight through a set of experiments on both image and text generation tasks. We demonstrate that min-max methods consistently achieve fairer outcomes compared to other approaches from the literature, while maintaining competitive overall performance for both tasks.
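The abstract's min-max idea can be illustrated with a minimal toy sketch, which is purely hypothetical and not the paper's actual method: stand in for each group's f-divergence with a squared gap to a per-group target, and at every step take a gradient step only on the worst-off group. All names (`minmax_step`, `group_targets`) and the quadratic stand-in are illustrative assumptions.

```python
def minmax_step(theta, group_targets, lr=0.02):
    """One min-max fine-tuning step on a toy scalar model.

    Each group's f-divergence is stood in for by the squared gap
    (theta - target)**2; we take a gradient step only on the group
    with the largest divergence. Iterating drives the per-group
    divergences toward a common value, the toy analogue of EGT.
    """
    divergences = [(theta - t) ** 2 for t in group_targets]
    worst = max(range(len(divergences)), key=lambda i: divergences[i])
    grad = 2.0 * (theta - group_targets[worst])  # gradient of the worst group's gap
    return theta - lr * grad, divergences
```

Starting from `theta = 0.0` with targets `[0.0, 1.0]`, repeated steps settle near `theta = 0.5`, where both groups incur (almost) equal divergence, mirroring the coupling the abstract describes between overall quality and the hardest group.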
Related papers
- On the use of graph models to achieve individual and group fairness [0.6299766708197883]
We provide a theoretical framework based on Sheaf Diffusion that leverages tools from dynamical systems and homology to model fairness.
We present a collection of network topologies handling different fairness metrics, leading to a unified method capable of dealing with both individual and group bias.
The paper showcases the performance of the proposed models in terms of accuracy and fairness.
arXiv Detail & Related papers (2026-01-13T18:17:43Z) - SONA: Learning Conditional, Unconditional, and Mismatching-Aware Discriminator [54.562217603802075]
We introduce Sum of Naturalness and Alignment (SONA), which employs separate projections for naturalness (authenticity) and alignment in the final layer with an inductive bias.
Experiments on class-conditional generation tasks show that SONA achieves superior sample quality and conditional alignment compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-10-06T08:26:06Z) - FedFACT: A Provable Framework for Controllable Group-Fairness Calibration in Federated Learning [23.38141950440522]
We propose a controllable federated group-fairness calibration framework, named FedFACT.
FedFACT identifies the Bayes-optimal classifiers under both global and local fairness constraints.
We show that FedFACT consistently outperforms baselines in balancing accuracy and global-local fairness.
arXiv Detail & Related papers (2025-06-04T09:39:57Z) - Your Classifier Can Do More: Towards Bridging the Gaps in Classification, Robustness, and Generation [18.149950949071982]
We study the energy distribution differences of clean, adversarial, and generated samples across various JEM variants and adversarially trained models.
We propose Energy-based Joint Distribution Adversarial Training to jointly model the clean data distribution, the adversarial distribution, and the classifier.
arXiv Detail & Related papers (2025-05-26T03:26:55Z) - DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models [56.88106830869487]
We introduce equi-tuning, a novel fine-tuning method that transforms (potentially non-equivariant) pretrained models into group equivariant models.
We provide applications of equi-tuning on three different tasks: image classification, compositional generalization in language, and fairness in natural language generation.
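The averaging construction behind equi-tuning can be sketched in a toy scalar setting. This is a minimal illustration under simplifying assumptions, not the paper's implementation: each group element is supplied as an (action, inverse-action) pair, and the symmetrized function averages over the group.

```python
def equi_tune(f, group_actions):
    """Symmetrize a pretrained function into a group-equivariant one.

    Averages over the group: f_G(x) = (1/|G|) * sum_g inv_g(f(g(x))),
    where each group element is an (action, inverse_action) pair acting
    on inputs and outputs. Toy scalar version; the paper applies the
    construction to pretrained neural networks.
    """
    def f_G(x):
        vals = [inv(f(act(x))) for act, inv in group_actions]
        return sum(vals) / len(vals)
    return f_G

# The two-element sign-flip group Z2 acting on scalars (illustrative choice).
Z2 = [(lambda x: x, lambda y: y), (lambda x: -x, lambda y: -y)]
```

For example, symmetrizing `f(x) = x**3 + x**2` over `Z2` keeps only the odd part `x**3`, and the result satisfies the equivariance identity `f_G(-x) = -f_G(x)` even though `f` itself does not.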
arXiv Detail & Related papers (2022-10-13T08:45:23Z) - RepFair-GAN: Mitigating Representation Bias in GANs Using Gradient Clipping [2.580765958706854]
We define a new fairness notion for generative models in terms of the distribution of generated samples sharing the same protected attributes.
We show that this fairness notion is violated even when the dataset contains equally represented groups.
We show that controlling the groups' gradient norm by performing group-wise gradient norm clipping in the discriminator leads to a more fair data generation.
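The rescaling rule behind group-wise gradient norm clipping can be sketched as follows. This is a toy, hypothetical version: gradients are plain lists of floats rather than the discriminator's tensor gradients, and the function name is illustrative, but the per-group norm bound works the same way.

```python
import math

def groupwise_clip(grads_by_group, max_norm):
    """Clip each group's gradient to a common norm bound.

    For each sensitive group, rescale its gradient so its L2 norm
    does not exceed max_norm; gradients already within the bound
    pass through unchanged. Equalizing per-group gradient norms is
    the mechanism the summary above attributes to the discriminator.
    """
    clipped = {}
    for group, grad in grads_by_group.items():
        norm = math.sqrt(sum(g * g for g in grad))
        scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
        clipped[group] = [g * scale for g in grad]
    return clipped
```

With `max_norm=1.0`, a group gradient `[3.0, 4.0]` (norm 5) is rescaled to `[0.6, 0.8]`, while a gradient of norm 0.5 is left untouched, so no single group's updates dominate training.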
arXiv Detail & Related papers (2022-07-13T14:58:48Z) - Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z) - Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.