Interacting with Masculinities: A Scoping Review
- URL: http://arxiv.org/abs/2304.13558v1
- Date: Sat, 22 Apr 2023 08:51:41 GMT
- Title: Interacting with Masculinities: A Scoping Review
- Authors: Katie Seaborn
- Abstract summary: We must recognize the genderful nature of humanity, acknowledge the evasiveness of men and masculinities, and avoid burdening women and genderful folk as the central actors and targets of change.
I present a 30-year history of masculinities in HCI work through a scoping review of 126 papers published in the ACM Human Factors in Computing Systems conference proceedings.
- Score: 13.32560004325655
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gender is a hot topic in the field of human-computer interaction (HCI). Work
has run the gamut, from assessing how we embed gender in our computational
creations to correcting systemic sexism, online and off. While gender is often
framed around women and femininities, we must recognize the genderful nature of
humanity, acknowledge the evasiveness of men and masculinities, and avoid
burdening women and genderful folk as the central actors and targets of change.
Indeed, critical voices have called for a shift in focus to masculinities, not
only in terms of privilege, power, and patriarchal harms, but also
participation, plurality, and transformation. To this end, I present a 30-year
history of masculinities in HCI work through a scoping review of 126 papers
published in the ACM Human Factors in Computing Systems (CHI) conference
proceedings. I offer a primer and agenda grounded in the CHI and extant
literatures to direct future work.
Related papers
- Masculine Defaults via Gendered Discourse in Podcasts and Large Language Models [17.48069194394518]
Masculine defaults involve three key parts: (i) the cultural context, (ii) the masculine characteristics or behaviors, and (iii) the reward for, or simply acceptance of, those masculine characteristics or behaviors.
We focus our study on podcasts, a popular and growing form of social media, analyzing 15,117 podcast episodes.
We study the prevalence of these gendered discourse words in domain-specific contexts, and find that gendered discourse-based masculine defaults exist in the domains of business, technology/politics, and video games.
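As a rough illustration of this kind of lexicon-based prevalence analysis, here is a minimal sketch; the word list and transcripts are invented stand-ins, not the paper's actual lexicon or podcast data.
```python
# Count how often masculine-coded discourse words occur per domain.
# Lexicon and transcripts below are illustrative placeholders only.
MASCULINE_DISCOURSE = {"dominate", "crush", "grind", "winner", "alpha"}

def masculine_word_rate(transcript: str) -> float:
    """Fraction of tokens matching the masculine-discourse lexicon."""
    tokens = transcript.lower().split()
    hits = sum(1 for t in tokens if t in MASCULINE_DISCOURSE)
    return hits / len(tokens) if tokens else 0.0

episodes_by_domain = {
    "business": ["we have to dominate the market and crush the competition"],
    "gardening": ["water the tomatoes gently every morning"],
}

for domain, episodes in episodes_by_domain.items():
    rates = [masculine_word_rate(e) for e in episodes]
    print(domain, round(sum(rates) / len(rates), 3))
```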
arXiv Detail & Related papers (2025-04-15T17:41:54Z)
- Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs) [82.57490175399693]
We study gender bias in 22 popular image-to-text vision-language assistants (VLAs).
Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances.
Among methods for reducing gender bias in these models, we find that fine-tuning-based debiasing achieves the best trade-off between debiasing and retaining performance.
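The abstract does not say which fine-tuning recipe the paper uses; one common approach is counterfactual data augmentation, sketched hypothetically below, where gendered words in training captions are swapped so the model is fine-tuned on both variants of every example.
```python
# Hypothetical counterfactual-augmentation sketch; the swap list is a
# toy stand-in, not the paper's method or vocabulary.
GENDER_SWAPS = {"he": "she", "she": "he", "man": "woman", "woman": "man",
                "his": "her", "her": "his"}

def counterfactual(caption: str) -> str:
    """Return the caption with binary-gendered words swapped (lowercased)."""
    return " ".join(GENDER_SWAPS.get(w, w) for w in caption.lower().split())

caption = "a man repairs his car"
training_pair = [caption, counterfactual(caption)]
print(training_pair)  # ['a man repairs his car', 'a woman repairs her car']
```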
arXiv Detail & Related papers (2024-10-25T05:59:44Z)
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
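The abstract names the EAS but does not give its formula, so the following is a toy sketch only: it averages a made-up attitude lexicon over each translation and compares gendered variants.
```python
# Toy attitude scoring; lexicon values are invented for illustration
# and are not the paper's EAS definition.
ATTITUDE = {"stubborn": -0.6, "determined": 0.7, "bossy": -0.8, "assertive": 0.5}

def attitude_score(sentence: str) -> float:
    hits = [ATTITUDE[w] for w in sentence.lower().split() if w in ATTITUDE]
    return sum(hits) / len(hits) if hits else 0.0

masc = attitude_score("he is determined and assertive")
fem = attitude_score("she is stubborn and bossy")
print(f"attitude gap between gendered translations: {masc - fem:+.2f}")
```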
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Think Before You Act: A Two-Stage Framework for Mitigating Gender Bias Towards Vision-Language Tasks [5.123567809055078]
Gender bias in vision-language models (VLMs) can reinforce harmful stereotypes and discrimination.
We propose GAMA, a task-agnostic generation framework to mitigate gender bias.
During narrative generation, GAMA yields all-sided but gender-obfuscated narratives.
During answer inference, GAMA integrates the image, generated narrative, and a task-specific question prompt to infer answers for different vision-language tasks.
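As a structural illustration of this two-stage flow, here is a minimal stub-based sketch; the function names and outputs are assumptions, not GAMA's actual API, and the model calls are stubbed where a vision-language model would be queried.
```python
# Two-stage pipeline skeleton: narrate first, then answer from the
# image plus the gender-obfuscated narrative.
def generate_narrative(image) -> str:
    """Stage 1: produce a detailed but gender-obfuscated description."""
    return "A person in a lab coat examines a sample under a microscope."

def infer_answer(image, narrative: str, question: str) -> str:
    """Stage 2: answer the task question from the image plus narrative."""
    return "examining a sample"  # stub; a real model call goes here

image = None  # placeholder for an actual image
narrative = generate_narrative(image)
print(infer_answer(image, narrative, "What is the person doing?"))
```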
arXiv Detail & Related papers (2024-05-27T06:20:58Z)
- Exploring Gender Biases in Language Patterns of Human-Conversational Agent Conversations [1.0878040851638]
This research delves into the impacts of gender biases in human-CA interactions, examining how CAs' gender designs might trigger users' pre-existing gender biases.
The findings are intended to inform the ethical design of conversational agents, including whether assigning gender to CAs is appropriate at all.
arXiv Detail & Related papers (2024-01-05T19:11:17Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
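A minimal sketch of the accuracy-gap measurement such benchmarks rely on follows; the examples and the deliberately biased stub model are illustrative stand-ins, not the actual dataset or any real model's behavior.
```python
# Compare how often a model resolves the correct pronoun per gender.
examples = [("img1", "she"), ("img2", "he"), ("img3", "she"), ("img4", "he")]

def model_resolve(image_id: str) -> str:
    return "he"  # a maximally biased stub standing in for a real model

def accuracy(pronoun: str) -> float:
    subset = [(i, p) for i, p in examples if p == pronoun]
    return sum(model_resolve(i) == p for i, p in subset) / len(subset)

print("resolution gap:", accuracy("he") - accuracy("she"))  # 1.0 here
```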
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation [69.25368160338043]
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
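A hedged sketch of the template-based probing such a dataset enables: fill one template with different identities and pronoun sets, then feed each prompt to a generator and inspect continuations for misgendering or toxicity. The template and terms below are illustrative, not TANGO's contents.
```python
# Build probe prompts from a single template with varying identity terms.
TEMPLATE = "{name} is {identity} and uses {pronouns} pronouns."

probes = [
    TEMPLATE.format(name="Casey", identity="non-binary", pronouns="they/them"),
    TEMPLATE.format(name="Casey", identity="a trans woman", pronouns="she/her"),
]

for prompt in probes:
    # continuation = language_model.generate(prompt)  # real evaluation step
    print(prompt)
```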
arXiv Detail & Related papers (2023-05-17T04:21:45Z)
- Transcending the "Male Code": Implicit Masculine Biases in NLP Contexts [10.036312061637764]
Critical scholarship has elevated the problem of gender bias in data sets used to train virtual assistants (VAs).
We examined two natural language processing (NLP) data sets.
We found that when gendered language was present, so were gender biases and especially masculine biases.
We offer a new dictionary called AVA that covers ambiguous associations between gendered language and the language of VAs.
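One way such a dictionary might be applied is sketched below: scan VA training text for terms whose gender association is ambiguous and flag them for review. The entry structure and contents are invented for illustration, not AVA's actual format.
```python
# Flag ambiguously gender-associated terms using a toy AVA-like lookup.
AVA_LIKE = {
    "assistant": {"association": "feminine", "ambiguous": True},
    "engineer": {"association": "masculine", "ambiguous": True},
}

def flag_ambiguous(text: str) -> list:
    return [w for w in text.lower().split()
            if AVA_LIKE.get(w, {}).get("ambiguous")]

print(flag_ambiguous("the assistant asked the engineer a question"))
```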
arXiv Detail & Related papers (2023-04-22T03:53:24Z)
- Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information Access [3.3903891679981593]
Information access research (and development) sometimes makes use of gender.
This work makes a variety of assumptions about gender that are not aligned with current understandings of what gender is.
Most papers we review rely on a binary notion of gender, even if they acknowledge that gender cannot be split into two categories.
arXiv Detail & Related papers (2023-01-12T01:21:02Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
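The amplification comparison described here boils down to contrasting a model statistic with the matching ground-truth statistic, as in this sketch; both numbers are invented for illustration.
```python
# Compare the model's "senior = male" rate against the ground truth.
ground_truth_senior_male_rate = 0.55  # hypothetical corpus statistic
model_senior_male_rate = 0.78         # hypothetical generation statistic

amplification = model_senior_male_rate - ground_truth_senior_male_rate
print(f"bias amplification over ground truth: {amplification:+.2f}")  # +0.23
```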
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
- Implicit Gender Bias in Computer Science -- A Qualitative Study [3.158346511479111]
Gender diversity in the tech sector is not sufficient to create a balanced ratio of men and women.
For many women, access to computer science is hampered by socialization-related, social, cultural and structural obstacles.
The lack of contact in areas of computer science makes it difficult to develop or expand potential interests.
arXiv Detail & Related papers (2021-07-04T13:30:26Z)
- Towards Gender-Neutral Face Descriptors for Mitigating Bias in Face Recognition [51.856693288834975]
State-of-the-art deep networks implicitly encode gender information while being trained for face recognition.
Gender is often viewed as an important attribute with respect to identifying faces.
We present a novel Adversarial Gender De-biasing algorithm (AGENDA) to reduce the gender information present in face descriptors.
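A sketch of adversarial de-biasing in the spirit described here follows; the architecture details are assumptions, not AGENDA's exact design. A gender classifier is trained on face descriptors, and a gradient-reversal layer pushes the descriptor network to discard the signal that classifier exploits.
```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity forward; reversed gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad  # reversed gradients flow back into the encoder

encoder = nn.Linear(128, 64)        # stand-in for a face descriptor network
gender_head = nn.Linear(64, 2)      # adversarial gender classifier

faces = torch.randn(8, 128)         # fake batch of face features
labels = torch.randint(0, 2, (8,))  # fake binary gender labels

logits = gender_head(GradReverse.apply(encoder(faces)))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()  # the encoder receives de-biasing (reversed) gradients
```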
arXiv Detail & Related papers (2020-06-14T08:54:03Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
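A toy sketch of multi-dimensional annotation in the spirit of this framework, which (roughly) distinguishes who is spoken about, spoken to, and speaking; the keyword heuristic is an illustrative stand-in for the paper's trained classifiers.
```python
# Annotate an utterance along several gender dimensions via keywords.
def annotate(utterance: str) -> dict:
    tokens = set(utterance.lower().split())
    about = ("feminine" if tokens & {"she", "her"}
             else "masculine" if tokens & {"he", "his", "him"}
             else "unknown")
    return {"about": about, "to": "unknown", "as": "unknown"}

print(annotate("She fixed the server before lunch"))
# {'about': 'feminine', 'to': 'unknown', 'as': 'unknown'}
```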
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.