Sex and Gender in the Computer Graphics Research Literature
- URL: http://arxiv.org/abs/2206.00480v1
- Date: Wed, 1 Jun 2022 13:24:17 GMT
- Title: Sex and Gender in the Computer Graphics Research Literature
- Authors: Ana Dodik, Silvia Sellán, Theodore Kim, Amanda Phillips
- Abstract summary: We survey the treatment of sex and gender in the Computer Graphics research literature from an algorithmic fairness perspective.
The established practices on the use of gender and sex in our community are scientifically incorrect and constitute a form of algorithmic bias with potentially harmful effects.
- Score: 4.05984965639419
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We survey the treatment of sex and gender in the Computer Graphics research literature from an algorithmic fairness perspective. The established practices on the use of gender and sex in our community are scientifically incorrect and constitute a form of algorithmic bias with potentially harmful effects. We propose ways of addressing these as technical limitations.
Related papers
- Divided by discipline? A systematic literature review on the quantification of online sexism and misogyny using a semi-automated approach [1.1599570446840546]
We present a semi-automated way to narrow down the search results in the different phases of the selection stage of the PRISMA flowchart.
We examine literature from computer science and the social sciences from 2012 to 2022.
We discuss the challenges and opportunities for future research dedicated to measuring online sexism and misogyny.
arXiv Detail & Related papers (2024-09-30T11:34:39Z)
- Beyond the binary: Limitations and possibilities of gender-related speech technology research [0.4551615447454769]
This paper presents a review of 107 research papers relating to speech and sex or gender in ISCA Interspeech publications between 2013 and 2023.
We find that terminology, particularly the word gender, is used in ways that are underspecified and often out of step with the prevailing view in social sciences.
We draw attention to the potential problems that this can cause for already marginalised groups, and suggest some questions for researchers to ask themselves when undertaking work on speech and gender.
arXiv Detail & Related papers (2024-09-20T08:56:09Z) - A multitask learning framework for leveraging subjectivity of annotators to identify misogyny [47.175010006458436]
We propose a multitask learning approach to enhance the performance of misogyny identification systems.
We incorporated diverse perspectives from annotators in our model design, considering gender and age across six profile groups.
This research advances content moderation and highlights the importance of embracing diverse perspectives to build effective online moderation systems.
arXiv Detail & Related papers (2024-06-22T15:06:08Z) - Unveiling Gender Bias in Terms of Profession Across LLMs: Analyzing and
Addressing Sociological Implications [0.0]
The study examines existing research on gender bias in AI language models and identifies gaps in the current knowledge.
The findings shed light on gendered word associations, language usage, and biased narratives present in the outputs of Large Language Models.
The paper presents strategies for reducing gender bias in LLMs, including algorithmic approaches and data augmentation techniques.
arXiv Detail & Related papers (2023-07-18T11:38:45Z) - Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
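A frequency-difference estimate of this kind can be sketched in a few lines. The attribute detections and indicator prompts below are hypothetical stand-ins, not the paper's actual pipeline or attribute list:

```python
from collections import Counter

def frequency_differences(detections_a, detections_b, attributes):
    """Per-attribute frequency difference between two sets of generations.

    detections_a / detections_b: lists of attribute sets, one per image,
    generated from prompts using two different gender indicators.
    """
    count_a = Counter(a for det in detections_a for a in det)
    count_b = Counter(a for det in detections_b for a in det)
    n_a, n_b = len(detections_a), len(detections_b)
    return {attr: count_a[attr] / n_a - count_b[attr] / n_b
            for attr in attributes}

# Toy example with made-up detector output for two indicator prompts:
woman_prompts = [{"long_hair", "skirt"}, {"long_hair"}]
man_prompts = [{"suit"}, {"suit", "long_hair"}]
diffs = frequency_differences(woman_prompts, man_prompts,
                              ["long_hair", "suit", "skirt"])
# diffs["suit"] is -1.0: the attribute appears in every "man" image
# and in no "woman" image in this toy sample.
```

Positive values mean the attribute appears more often for the first indicator; values near zero suggest no presentation difference for that attribute.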
arXiv Detail & Related papers (2023-02-07T18:52:22Z) - Much Ado About Gender: Current Practices and Future Recommendations for
Appropriate Gender-Aware Information Access [3.3903891679981593]
Information access research (and development) sometimes makes use of gender.
This work makes a variety of assumptions about gender that are not aligned with current understandings of what gender is.
Most papers we review rely on a binary notion of gender, even if they acknowledge that gender cannot be split into two categories.
arXiv Detail & Related papers (2023-01-12T01:21:02Z) - Towards Understanding Gender-Seniority Compound Bias in Natural Language
Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z) - Gender Bias in Text: Labeled Datasets and Lexicons [0.30458514384586394]
There is a lack of gender bias datasets and lexicons for automating the detection of gender bias.
We provide labeled datasets and exhaustive lexicons by collecting, annotating, and augmenting relevant sentences.
The released datasets and lexicons span multiple bias subtypes including: Generic He, Generic She, Explicit Marking of Sex, and Gendered Neologisms.
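Lexicon-based detection of such subtypes amounts to pattern matching against labeled word lists. The mini-lexicons below are hypothetical illustrations; the released resources are far larger and more carefully curated:

```python
import re

# Hypothetical mini-lexicons keyed by bias subtype; stand-ins only.
LEXICONS = {
    "explicit_marking_of_sex": [r"\bfemale (doctor|engineer|pilot)\b",
                                r"\bmale (nurse|secretary)\b"],
    "gendered_neologisms": [r"\bchairman\b", r"\bmankind\b"],
}

def bias_subtypes(sentence):
    """Return the sorted list of bias subtypes matched in a sentence."""
    s = sentence.lower()
    return sorted(subtype for subtype, patterns in LEXICONS.items()
                  if any(re.search(p, s) for p in patterns))

print(bias_subtypes("She is a female doctor."))
print(bias_subtypes("The doctor arrived."))
```

Each subtype fires at most once per sentence; a real detector would also need context to separate generic from referential uses of gendered words.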
arXiv Detail & Related papers (2022-01-21T12:44:51Z) - A Survey of Embedding Space Alignment Methods for Language and Knowledge
Graphs [77.34726150561087]
We survey the current research landscape on word, sentence and knowledge graph embedding algorithms.
We provide a classification of the relevant alignment techniques and discuss benchmark datasets used in this field of research.
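A standard baseline in this family of alignment techniques is orthogonal Procrustes: given paired vectors in two embedding spaces, find the rotation mapping one space onto the other. The sketch below is a generic illustration, not the survey's own classification:

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal Procrustes: the W minimizing ||X @ W - Y||_F
    over orthogonal matrices W, via the SVD of X^T Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Synthetic check: rotate one "embedding space" by a known matrix R
# and verify that alignment recovers it.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))          # 100 paired anchor words, dim 4
theta = 0.3
R = np.eye(4)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
Y = X @ R                                   # second space, exactly rotated
W = procrustes_align(X, Y)
```

Restricting W to be orthogonal preserves distances and angles in the source space, which is why Procrustes is a common choice for aligning independently trained word embeddings.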
arXiv Detail & Related papers (2020-10-26T16:08:13Z) - Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by
Ranking Algorithms [68.85295025020942]
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first specifically tailored measure for Information Retrieval, capable of quantifying representational harms.
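Measures in this family typically combine a per-document stereotype score with a position-based exposure discount. The sketch below is a generic exposure-weighted illustration of that idea, not the paper's actual GSR definition:

```python
import math

def stereotype_score(ranking, genderedness):
    """Exposure-weighted average genderedness of a ranked list.

    genderedness maps a document id to a score in [-1, 1]; documents
    near the top receive more exposure via a DCG-style log discount.
    """
    weights = [1.0 / math.log2(rank + 2) for rank in range(len(ranking))]
    total = sum(weights)
    return sum(w * genderedness[doc]
               for w, doc in zip(weights, ranking)) / total

# Toy example: a stereotyped document ranked first dominates the score.
g = {"d1": 1.0, "d2": 0.0, "d3": -1.0}
forward = stereotype_score(["d1", "d2", "d3"], g)
reverse = stereotype_score(["d3", "d2", "d1"], g)
```

A score near zero indicates the ranking gives balanced exposure; reversing a ranking over the same documents negates the score.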
arXiv Detail & Related papers (2020-09-02T20:45:04Z) - Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.