Fairness through Feedback: Addressing Algorithmic Misgendering in Automatic Gender Recognition
- URL: http://arxiv.org/abs/2506.02017v1
- Date: Wed, 28 May 2025 07:49:05 GMT
- Title: Fairness through Feedback: Addressing Algorithmic Misgendering in Automatic Gender Recognition
- Authors: Camilla Quaresmini, Giacomo Zanotti
- Abstract summary: We suggest a theoretical and practical rethinking of Automatic Gender Recognition systems. We build upon the observation that, unlike algorithmic misgendering, human-human misgendering is open to the possibility of re-evaluation and correction.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic Gender Recognition (AGR) systems are an increasingly widespread application in the Machine Learning (ML) landscape. While these systems are typically understood as detecting gender, they often classify datapoints based on observable features correlated at best with either male or female sex. In addition to questionable binary assumptions, from an epistemological point of view, this is problematic for two reasons. First, there exists a gap between the categories the system is meant to predict (woman versus man) and those onto which their output reasonably maps (female versus male). What is more, gender cannot be inferred on the basis of such observable features. This makes AGR tools often unreliable, especially in the case of non-binary and gender non-conforming people. We suggest a theoretical and practical rethinking of AGR systems. To begin, distinctions are made between sex, gender, and gender expression. Then, we build upon the observation that, unlike algorithmic misgendering, human-human misgendering is open to the possibility of re-evaluation and correction. We suggest that analogous dynamics should be recreated in AGR, giving users the possibility to correct the system's output. While implementing such a feedback mechanism could be regarded as diminishing the system's autonomy, it represents a way to significantly increase fairness levels in AGR. This is consistent with the conceptual change of paradigm that we advocate for AGR systems, which should be understood as tools respecting individuals' rights and capabilities of self-expression and determination.
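To make the proposed feedback mechanism concrete, here is a minimal sketch, in Python, of how a classifier could expose a user-correction channel. All names (`CorrectableClassifier`, `Prediction`, `user_override`) are illustrative assumptions for this page, not code from the paper, and the underlying model is stubbed out.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Prediction:
    label: str                           # the model's initial output
    confidence: float
    user_override: Optional[str] = None  # self-identified label, if provided

    @property
    def effective_label(self) -> str:
        # A user correction always takes precedence over the model's guess.
        return self.user_override or self.label


@dataclass
class CorrectableClassifier:
    # Corrections are stored per subject so they persist across sessions
    # instead of being re-inferred (and potentially gotten wrong again).
    overrides: dict = field(default_factory=dict)

    def predict(self, subject_id: str, features) -> Prediction:
        label, confidence = self._infer(features)
        return Prediction(label, confidence,
                          user_override=self.overrides.get(subject_id))

    def correct(self, subject_id: str, self_identified_label: str) -> None:
        # The feedback channel: the person, not the model, has final authority.
        self.overrides[subject_id] = self_identified_label

    def _infer(self, features):
        # Placeholder for a real ML model; returns (label, confidence).
        return "female", 0.73


if __name__ == "__main__":
    clf = CorrectableClassifier()
    print(clf.predict("user-42", features=None).effective_label)  # "female"
    clf.correct("user-42", "non-binary")  # the user re-evaluates and corrects
    print(clf.predict("user-42", features=None).effective_label)  # "non-binary"
```

The design choice this illustrates is that the model's output is treated as a defeasible first guess: once a person corrects it, the self-identified label is stored and takes precedence, mirroring how human-human misgendering remains open to re-evaluation and correction.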
Related papers
- On the "Illusion" of Gender Bias in Face Recognition: Explaining the Fairness Issue Through Non-demographic Attributes [7.602456562464879]
Face recognition systems exhibit significant accuracy differences based on the user's gender.
We propose a toolchain to effectively decorrelate and aggregate facial attributes to enable a less-biased gender analysis.
Experiments show that the gender gap vanishes when images of male and female subjects share specific attributes.
arXiv Detail & Related papers (2025-01-21T10:21:19Z)
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents the AmbGIMT benchmark (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- "My Kind of Woman": Analysing Gender Stereotypes in AI through The Averageness Theory and EU Law [0.0]
This study delves into gender classification systems, shedding light on the interaction between social stereotypes and algorithmic determinations.
By incorporating cognitive psychology and feminist legal theory, we examine how data used for AI training can foster gender diversity and fairness.
arXiv Detail & Related papers (2024-06-27T20:03:27Z) - Hacking a surrogate model approach to XAI [49.1574468325115]
We show that even if a discriminated subgroup does not get a positive decision from the black box ADM system, the corresponding question of group membership can be pushed down to arbitrarily low levels.
Our approach can be generalized easily to other surrogate models.
arXiv Detail & Related papers (2024-06-24T13:18:02Z) - On the Encoding of Gender in Transformer-based ASR Representations [18.08250235967961]
This work investigates the encoding and utilization of gender in the latent representations of two ASR models, Wav2Vec2 and HuBERT.
Our analysis reveals a concentration of gender information within the first and last frames in the final layers, explaining the ease of erasing gender in these layers.
arXiv Detail & Related papers (2024-06-14T09:10:24Z)
- The Gender-GAP Pipeline: A Gender-Aware Polyglot Pipeline for Gender Characterisation in 55 Languages [51.2321117760104]
This paper describes the Gender-GAP Pipeline, an automatic pipeline to characterize gender representation in large-scale datasets for 55 languages.
The pipeline uses a multilingual lexicon of gendered person-nouns to quantify the gender representation in text.
We showcase it to report gender representation in WMT training data and development data for the News task, confirming that current data is skewed towards masculine representation.
arXiv Detail & Related papers (2023-08-31T17:20:50Z)
- "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation [69.25368160338043]
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
arXiv Detail & Related papers (2023-05-17T04:21:45Z)
- Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
arXiv Detail & Related papers (2023-03-20T19:32:49Z)
- Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher Mixture Model [7.049738935364298]
In this work, we investigate the gender bias of deep Face Recognition networks.
Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology.
Extensive numerical experiments on a variety of datasets show that a careful selection of the method's hyperparameters significantly reduces gender bias.
arXiv Detail & Related papers (2022-10-24T23:53:56Z)
- Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by Ranking Algorithms [68.85295025020942]
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first specifically tailored measure for Information Retrieval, capable of quantifying representational harms.
arXiv Detail & Related papers (2020-09-02T20:45:04Z)