Fairness for Image Generation with Uncertain Sensitive Attributes
- URL: http://arxiv.org/abs/2106.12182v1
- Date: Wed, 23 Jun 2021 06:17:17 GMT
- Title: Fairness for Image Generation with Uncertain Sensitive Attributes
- Authors: Ajil Jalal, Sushrut Karmalkar, Jessica Hoffmann, Alexandros G. Dimakis, and Eric Price
- Abstract summary: This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
- Score: 97.81354305427871
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work tackles the issue of fairness in the context of generative
procedures, such as image super-resolution, which entail different definitions
from the standard classification setting. Moreover, while traditional group
fairness definitions are typically defined with respect to specified protected
groups -- camouflaging the fact that these groupings are artificial and carry
historical and political motivations -- we emphasize that there are no ground
truth identities. For instance, should South and East Asians be viewed as a
single group or separate groups? Should we consider one race as a whole or
further split by gender? Choosing which groups are valid and who belongs in
them is an impossible dilemma and being "fair" with respect to Asians may
require being "unfair" with respect to South Asians. This motivates the
introduction of definitions that allow algorithms to be oblivious to the
relevant groupings.
We define several intuitive notions of group fairness and study their
incompatibilities and trade-offs. We show that the natural extension of
demographic parity is strongly dependent on the grouping, and impossible
to achieve obliviously. On the other hand, the conceptually new definition we
introduce, Conditional Proportional Representation, can be achieved obliviously
through Posterior Sampling. Our experiments validate our theoretical results
and achieve fair image reconstruction using state-of-the-art generative models.
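
The distinction the abstract draws between demographic parity and Conditional Proportional Representation can be made concrete with a toy, discrete analogue of the reconstruction problem. The Bernoulli/Gaussian observation model and the function names below are illustrative assumptions rather than the paper's implementation; the sketch only shows why a MAP-style reconstruction that always commits to the most likely group erases the minority group, while sampling from the posterior reproduces the conditional group proportions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): a binary latent attribute A with prior
# p(A=1) = 0.3, and a noisy scalar observation y of the underlying signal.
p_prior = np.array([0.7, 0.3])
means = np.array([0.0, 1.0])    # signal mean for each group
sigma = 1.0                     # observation noise

def posterior(y):
    """p(A | y) under the toy Gaussian observation model."""
    lik = np.exp(-0.5 * ((y - means) / sigma) ** 2)
    post = p_prior * lik
    return post / post.sum()

def reconstruct_map(y):
    # MAP-style reconstruction: always commit to the most likely group.
    return int(np.argmax(posterior(y)))

def reconstruct_posterior_sampling(y):
    # Posterior sampling: draw the group label from p(A | y).
    return int(rng.random() < posterior(y)[1])

# Many users whose observations happen to be identical and ambiguous (y = 0.6).
y, n = 0.6, 10_000
map_outputs = np.array([reconstruct_map(y) for _ in range(n)])
ps_outputs = np.array([reconstruct_posterior_sampling(y) for _ in range(n)])

print("posterior p(A=1 | y)      :", posterior(y)[1])
print("MAP outputs with A=1      :", map_outputs.mean())  # 0: minority group vanishes
print("sampled outputs with A=1  :", ps_outputs.mean())   # matches the posterior
```

In this toy model the MAP reconstructor outputs the majority group for every ambiguous observation, while posterior sampling returns each group in proportion to p(A | y), regardless of how the groups are later defined, which is the oblivious behaviour the abstract attributes to Posterior Sampling.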
Related papers
- Perceptual Fairness in Image Restoration [34.50287066865267]
Group Perceptual Index (GPI) is a statistical distance between the distribution of the group's ground truth images and the distribution of their reconstructions.
We assess the fairness of an algorithm by comparing the GPI of different groups, and say that it achieves perfect Perceptual Fairness (PF) if the GPIs of all groups are identical.
arXiv Detail & Related papers (2024-05-22T16:32:20Z)
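
A minimal sketch of the Group Perceptual Index comparison described in the entry above, assuming each image is summarised by a scalar feature and using a 1-D Wasserstein distance as a stand-in for the statistical distance; the paper's actual choice of distance and feature space may differ.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def group_perceptual_index(gt_features, recon_features):
    """Statistical distance between a group's ground-truth and reconstruction
    distributions; a 1-D Wasserstein distance over a scalar image feature
    stands in for whatever distance the paper actually uses."""
    return wasserstein_distance(gt_features, recon_features)

def perceptual_fairness_gap(groups):
    """groups: dict mapping group name -> (gt_features, recon_features).
    Perfect Perceptual Fairness would make all GPIs identical, i.e. gap == 0."""
    gpis = {g: group_perceptual_index(gt, rec) for g, (gt, rec) in groups.items()}
    return gpis, max(gpis.values()) - min(gpis.values())

# Hypothetical data: reconstructions for group "b" drift further from their ground truth.
rng = np.random.default_rng(1)
groups = {
    "a": (rng.normal(0.0, 1.0, 5000), rng.normal(0.05, 1.0, 5000)),
    "b": (rng.normal(0.0, 1.0, 5000), rng.normal(0.50, 1.0, 5000)),
}
gpis, gap = perceptual_fairness_gap(groups)
print(gpis, "PF gap:", gap)
```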
- Fair Without Leveling Down: A New Intersectional Fairness Definition [1.0958014189747356]
We propose a new definition, α-Intersectional Fairness, which combines the absolute and the relative performance across sensitive groups.
We benchmark multiple popular in-processing fair machine learning approaches using our new fairness definition and show that they do not achieve any improvement over a simple baseline.
arXiv Detail & Related papers (2023-05-21T16:15:12Z)
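
The entry above only says that absolute and relative performance across sensitive groups are combined; the hypothetical score below is one way such a combination could look (worst absolute group performance blended with the worst pairwise ratio), and is not the paper's actual definition.

```python
from itertools import combinations

def alpha_intersectional_score(group_perf: dict, alpha: float) -> float:
    """Hypothetical illustration only: blend absolute performance (worst group)
    with relative performance (worst pairwise ratio between groups). The paper's
    actual alpha-Intersectional Fairness definition may differ."""
    worst_abs = min(group_perf.values())
    worst_rel = min(min(a, b) / max(a, b) for a, b in combinations(group_perf.values(), 2))
    return alpha * worst_abs + (1 - alpha) * worst_rel

# Intersectional subgroups (e.g. race x gender) with hypothetical accuracies.
perf = {("A", "f"): 0.91, ("A", "m"): 0.88, ("B", "f"): 0.72, ("B", "m"): 0.80}
print(alpha_intersectional_score(perf, alpha=0.5))
```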
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? [90.35044668396591]
A recurring theme in machine learning is algorithmic monoculture: the same systems, or systems that share components, are deployed by multiple decision-makers.
We propose the component-sharing hypothesis: if decision-makers share components like training data or specific models, then they will produce more homogeneous outcomes.
We test this hypothesis on algorithmic fairness benchmarks, demonstrating that sharing training data reliably exacerbates homogenization.
We conclude with philosophical analyses of and societal challenges for outcome homogenization, with an eye towards implications for deployed machine learning systems.
arXiv Detail & Related papers (2022-11-25T09:33:11Z)
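
Outcome homogenization, as described in the entry above, concerns the same individuals being failed by every deployed system. The sketch below compares a systemic rejection rate under independently behaving systems against systems that inherit the same ranking from shared training data; the metric name and the toy decision model are assumptions for illustration, not the paper's exact measurements.

```python
import numpy as np

def systemic_failure_rate(decisions: np.ndarray) -> float:
    """decisions: (n_systems, n_people) binary array, 1/True = accept, 0/False = reject.
    Fraction of people rejected by every system simultaneously."""
    return float((decisions == 0).all(axis=0).mean())

rng = np.random.default_rng(2)
n_people, reject_rate = 10_000, 0.3

# Independent systems: each rejects 30% of people at random.
independent = rng.random((3, n_people)) < (1 - reject_rate)

# Component-sharing systems: all three inherit the same ranking from shared
# training data, so they reject (mostly) the same 30% of people.
shared_score = rng.random(n_people)
shared = np.stack([shared_score + 0.02 * rng.random(n_people) > reject_rate
                   for _ in range(3)])

print("systemic rejection, independent systems:", systemic_failure_rate(independent))
print("systemic rejection, shared components  :", systemic_failure_rate(shared))
```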
- Characterization of Group-Fair Social Choice Rules under Single-Peaked Preferences [0.5161531917413706]
We study fairness in social choice settings under single-peaked preferences.
We provide two separate characterizations of random social choice rules that satisfy group-fairness.
arXiv Detail & Related papers (2022-07-16T17:12:54Z)
- On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
arXiv Detail & Related papers (2022-05-05T14:26:50Z)
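
One way to read the balanced-neighborhood criterion in the entry above is to measure, for each sample, how far the sensitive-attribute mix among its nearest neighbors in representation space deviates from the population mix. The k-NN construction and the imbalance score below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_imbalance(representations: np.ndarray, sensitive: np.ndarray, k: int = 20) -> float:
    """Mean absolute gap between the sensitive-attribute rate in each sample's
    k-nearest neighborhood (in representation space) and the global rate.
    0 means every neighborhood mirrors the population; larger is less locally fair."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(representations)
    _, idx = nn.kneighbors(representations)
    neighbor_rates = sensitive[idx[:, 1:]].mean(axis=1)  # drop self (column 0)
    return float(np.abs(neighbor_rates - sensitive.mean()).mean())

# Hypothetical representations: one where the sensitive attribute is encoded,
# one where it has been removed.
rng = np.random.default_rng(3)
a = rng.integers(0, 2, 2000)
biased = np.c_[rng.normal(size=(2000, 8)), 3.0 * a]   # attribute leaks into one dimension
fair = rng.normal(size=(2000, 8))                      # attribute not encoded

print("imbalance, biased representation:", local_imbalance(biased, a))
print("imbalance, fair representation  :", local_imbalance(fair, a))
```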
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Fairness Through Counterfactual Utilities [0.0]
Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision-problem that restrict them to classification problems.
We provide a generalized set of group fairness definitions that unambiguously extend to all machine learning environments.
arXiv Detail & Related papers (2021-08-11T16:51:27Z)
- Characterizing Intersectional Group Fairness with Worst-Case Comparisons [0.0]
We discuss why fairness metrics need to be looked at under the lens of intersectionality.
We suggest a simple worst case comparison method to expand the definitions of existing group fairness metrics.
We conclude with the social, legal and political framework to handle intersectional fairness in the modern context.
arXiv Detail & Related papers (2021-01-05T17:44:33Z)
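
The worst-case comparison described in the entry above can be sketched as a wrapper around any per-group metric: evaluate it on every intersectional subgroup and report the largest gap. The use of positive prediction rate as the base metric and the subgroup construction below are illustrative assumptions.

```python
import numpy as np
from itertools import product

def worst_case_gap(y_pred: np.ndarray, attrs: dict) -> float:
    """Evaluate a base group metric (here: positive prediction rate) on every
    intersectional subgroup and return the worst-case gap between subgroups."""
    names = list(attrs)
    rates = []
    for combo in product(*(np.unique(attrs[n]) for n in names)):
        mask = np.ones(len(y_pred), dtype=bool)
        for n, v in zip(names, combo):
            mask &= attrs[n] == v
        if mask.sum() > 0:                 # skip empty intersections
            rates.append(y_pred[mask].mean())
    return float(max(rates) - min(rates))

# Hypothetical predictions and two sensitive attributes.
rng = np.random.default_rng(4)
y_pred = rng.integers(0, 2, 5000)
attrs = {"race": rng.integers(0, 3, 5000), "gender": rng.integers(0, 2, 5000)}
print("worst-case intersectional gap:", worst_case_gap(y_pred, attrs))
```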
- Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.