Much Ado About Gender: Current Practices and Future Recommendations for
Appropriate Gender-Aware Information Access
- URL: http://arxiv.org/abs/2301.04780v2
- Date: Fri, 13 Jan 2023 23:06:53 GMT
- Title: Much Ado About Gender: Current Practices and Future Recommendations for
Appropriate Gender-Aware Information Access
- Authors: Christine Pinney, Amifa Raj, Alex Hanna, and Michael D. Ekstrand
- Abstract summary: Information access research (and development) sometimes makes use of gender.
Such research makes a variety of assumptions about gender that are not aligned with current understandings of what gender is.
Most papers we review rely on a binary notion of gender, even if they acknowledge that gender cannot be split into two categories.
- Score: 3.3903891679981593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Information access research (and development) sometimes makes use of gender,
whether to report on the demographics of participants in a user study, as
inputs to personalized results or recommendations, or to make systems
gender-fair, amongst other purposes. This work makes a variety of assumptions
about gender, however, that are not necessarily aligned with current
understandings of what gender is, how it should be encoded, and how a gender
variable should be ethically used. In this work, we present a systematic review
of papers on information retrieval and recommender systems that mention gender
in order to document how gender is currently being used in this field. We find
that most papers mentioning gender do not use an explicit gender variable, but
most of those that do focus on contextualizing results of model
performance, personalizing a system based on assumptions of user gender, or
auditing a model's behavior for fairness or other privacy-related issues.
Moreover, most of the papers we review rely on a binary notion of gender, even
if they acknowledge that gender cannot be split into two categories. We connect
these findings with scholarship on gender theory and recent work on gender in
human-computer interaction and natural language processing. We conclude by
making recommendations for ethical and well-grounded use of gender in building
and researching information access systems.
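To make the review's categorization concrete, the sketch below models the coding schema as a small Python data structure. All names here are hypothetical paraphrases of the abstract's categories, not the authors' actual codebook.
```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical coding schema; category names paraphrase the abstract,
# not the authors' published codebook.
class GenderUse(Enum):
    NO_EXPLICIT_VARIABLE = auto()  # gender mentioned but never operationalized
    CONTEXTUALIZING = auto()       # e.g., reporting model performance by demographic
    PERSONALIZING = auto()         # tailoring results to an assumed user gender
    AUDITING = auto()              # fairness / privacy audits of model behavior

class GenderEncoding(Enum):
    BINARY = auto()                # the dominant encoding the review observes
    MULTI_CATEGORY = auto()
    OPEN_ENDED = auto()
    UNSPECIFIED = auto()

@dataclass
class ReviewedPaper:
    title: str
    use: GenderUse
    encoding: GenderEncoding

def binary_share(papers: list[ReviewedPaper]) -> float:
    """Share of papers with an explicit gender variable that encode it as binary."""
    explicit = [p for p in papers if p.use is not GenderUse.NO_EXPLICIT_VARIABLE]
    if not explicit:
        return 0.0
    return sum(p.encoding is GenderEncoding.BINARY for p in explicit) / len(explicit)
```
A helper like `binary_share` reproduces the kind of headline statistic the abstract reports: among papers that use an explicit gender variable, how many reduce gender to two categories.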
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT, a benchmark for Gender-Inclusive Machine Translation with Ambiguous attitude words.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation [64.79319733514266]
Large Language Models (LLMs) can generate biased and toxic responses.
We propose a conditional text generation mechanism without the need for predefined gender phrases and stereotypes.
arXiv Detail & Related papers (2023-11-01T05:31:46Z)
- Inferring gender from name: a large scale performance evaluation study [4.934579134540613]
Researchers need to infer gender from readily available information, primarily from persons' names.
Name-to-gender inference has generated an ever-growing domain of algorithmic approaches and software products.
We conduct a large scale performance evaluation of existing approaches for name-to-gender inference.
We propose two new hybrid approaches that achieve better performance than any single existing approach.
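As a hedged illustration of what such name-to-gender tools do, here is a minimal dictionary lookup with an abstention threshold. The name counts are invented for illustration; real tools draw on large name-frequency databases, and the study above evaluates many of them.
```python
# Minimal sketch of dictionary-based name-to-gender inference with an
# abstention threshold. The counts below are invented; the main paper
# argues such inference should be used with great caution, if at all.
NAME_COUNTS = {
    # name: (count labeled "female", count labeled "male") in some corpus
    "alex": (4200, 5100),
    "maria": (9800, 150),
    "james": (300, 11200),
}

def infer_gender(name: str, threshold: float = 0.9) -> str:
    """Return 'female', 'male', or 'unknown' when evidence is weak or absent."""
    counts = NAME_COUNTS.get(name.strip().lower())
    if counts is None:
        return "unknown"
    female, male = counts
    p_female = female / (female + male)
    if p_female >= threshold:
        return "female"
    if p_female <= 1 - threshold:
        return "male"
    return "unknown"  # ambiguous names like "alex" fall through here

print(infer_gender("Maria"))  # -> 'female'
print(infer_gender("Alex"))   # -> 'unknown'
```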
arXiv Detail & Related papers (2023-08-22T13:38:45Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
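A hedged sketch of how such a benchmark can score a model: compare image-text similarity for captions that differ only in the pronoun. `score_image_text` stands in for any CLIP-style scoring function and is an assumed interface, not VisoGender's actual API.
```python
from typing import Callable

# Hypothetical evaluation loop in the spirit of VisoGender; the dataset
# schema and `score_image_text` interface are assumptions, not the
# benchmark's actual API.
def pronoun_resolution_accuracy(
    examples: list[dict],
    score_image_text: Callable[[object, str], float],
    pronouns: tuple[str, ...] = ("her", "his"),
) -> float:
    correct = 0
    for ex in examples:
        # ex["template"] is a caption with a pronoun slot, e.g.
        # "The doctor examined {} patient."; ex["gold"] is the correct pronoun.
        scores = {p: score_image_text(ex["image"], ex["template"].format(p))
                  for p in pronouns}
        predicted = max(scores, key=scores.get)  # pronoun the model prefers
        correct += predicted == ex["gold"]
    return correct / len(examples)
```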
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Theories of "Gender" in NLP Bias Research [0.0]
We survey nearly 200 articles concerning gender bias in NLP.
We find that the majority of the articles do not make their theorization of gender explicit.
Many conflate sex characteristics, social gender, and linguistic gender in ways that disregard the existence and experience of trans, nonbinary, and intersex people.
arXiv Detail & Related papers (2022-05-05T09:20:53Z)
- A Survey on Gender Bias in Natural Language Processing [22.91475787277623]
We present a survey of 304 papers on gender bias in natural language processing.
We compare and contrast approaches to detecting and mitigating gender bias.
We find that research on gender bias suffers from four core limitations.
arXiv Detail & Related papers (2021-12-28T14:54:18Z)
- Gendered Language in Resumes and its Implications for Algorithmic Bias in Hiring [0.0]
We train a series of models to classify the gender of the applicant.
We investigate whether it is possible to obfuscate gender from resumes.
We find that there is a significant amount of gendered information in resumes even after obfuscation.
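As an illustrative probe (not the paper's actual models) of how much gendered signal survives in text, one can train a simple classifier and read its held-out accuracy; `resumes` and `labels` are assumed inputs.
```python
# Generic scikit-learn probe: accuracy well above chance on held-out
# text after obfuscation would indicate residual gendered information,
# the kind of finding the paper reports. Not the paper's model or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def gender_signal_accuracy(resumes: list[str], labels: list[str]) -> float:
    X_train, X_test, y_train, y_test = train_test_split(
        resumes, labels, test_size=0.2, random_state=0, stratify=labels
    )
    probe = make_pipeline(TfidfVectorizer(min_df=2),
                          LogisticRegression(max_iter=1000))
    probe.fit(X_train, y_train)
    return probe.score(X_test, y_test)
```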
arXiv Detail & Related papers (2021-12-16T14:26:36Z)
- They, Them, Theirs: Rewriting with Gender-Neutral English [56.14842450974887]
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with a 1% word error rate, using no human-labeled data.
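The paper trains a model rather than writing rules; a naive rule-based baseline, sketched below, shows why: simple substitution cannot fix verb agreement and must skip ambiguous forms such as "her" (which maps to either "them" or "their").
```python
import re

# Naive rule-based baseline for gender-neutral rewriting. "her" is
# deliberately omitted because it is ambiguous ("them" vs. "their"),
# and verb agreement ("she has" -> "they have") is left unhandled;
# these gaps are what motivate a learned model.
PRONOUN_MAP = {
    "he": "they", "she": "they",
    "him": "them",
    "his": "their", "hers": "theirs",
    "himself": "themself", "herself": "themself",
}

def naive_neutralize(text: str) -> str:
    def repl(match: re.Match) -> str:
        word = match.group(0)
        neutral = PRONOUN_MAP[word.lower()]
        return neutral.capitalize() if word[0].isupper() else neutral

    pattern = r"\b(" + "|".join(PRONOUN_MAP) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

print(naive_neutralize("He has left; she is staying."))
# -> 'They has left; they is staying.' (agreement is exactly what rules miss)
```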
arXiv Detail & Related papers (2021-02-12T21:47:48Z)
- Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by Ranking Algorithms [68.85295025020942]
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first measure specifically tailored to Information Retrieval that is capable of quantifying representational harms.
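The abstract does not give the GSR formula, so the sketch below only illustrates the general idea behind exposure-based ranking measures: rank-discount a per-document stereotype polarity score. Both the discounting choice and the `polarities` input are assumptions, not the paper's definition.
```python
import math

def exposure_weighted_polarity(polarities: list[float]) -> float:
    """Rank-discounted average of per-document stereotype polarity.

    polarities[i] is an assumed score in [-1, 1] for the document at
    rank i + 1 (positive = stereotype-congruent). Logarithmic discounting
    mirrors DCG-style exposure models; this is an illustration of the
    general idea, not GSR's actual formulation.
    """
    weights = [1.0 / math.log2(rank + 2) for rank in range(len(polarities))]
    return sum(w * p for w, p in zip(weights, polarities)) / sum(weights)

# A ranking that front-loads stereotype-congruent documents scores higher:
print(exposure_weighted_polarity([1.0, 1.0, -1.0]))   # ~0.53
print(exposure_weighted_polarity([-1.0, 1.0, 1.0]))   # ~0.06
```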
arXiv Detail & Related papers (2020-09-02T20:45:04Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)