Exploring Gender Biases in Language Patterns of Human-Conversational
Agent Conversations
- URL: http://arxiv.org/abs/2401.03030v1
- Date: Fri, 5 Jan 2024 19:11:17 GMT
- Title: Exploring Gender Biases in Language Patterns of Human-Conversational
Agent Conversations
- Authors: Weizi Liu
- Abstract summary: This research investigates the impacts of gender biases in human-CA interactions.
It examines how pre-existing gender biases might be triggered by CAs' gender designs.
The findings aim to inform the ethical design of conversational agents, addressing whether gender assignment in CAs is appropriate.
- Score: 1.0878040851638
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rise of human-machine communication, machines are increasingly
designed with humanlike characteristics, such as gender, which can
inadvertently trigger cognitive biases. Many conversational agents (CAs), such
as voice assistants and chatbots, default to female personas, leading to
concerns about perpetuating gender stereotypes and inequality. Critiques have
emerged regarding the potential objectification of females and reinforcement of
gender stereotypes by these technologies. This research, situated in
conversational AI design, aims to delve deeper into the impacts of gender
biases in human-CA interactions. From a behavioral and communication research
standpoint, this program focuses not only on perceptions but also on the
linguistic styles of users when interacting with CAs, an aspect previous
research has rarely explored. It aims to understand how pre-existing gender biases might be
triggered by CAs' gender designs. It further investigates how CAs' gender
designs may reinforce gender biases and extend them to human-human
communication. The findings aim to inform ethical design of conversational
agents, addressing whether gender assignment in CAs is appropriate and how to
promote gender equality in design.
Related papers
- Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs) [82.57490175399693]
We study gender bias in 22 popular image-to-text vision-language assistants (VLAs).
Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances.
To eliminate the gender bias in these models, we find that finetuning-based debiasing methods achieve the best tradeoff between debiasing and retaining performance on downstream tasks.
arXiv Detail & Related papers (2024-10-25T05:59:44Z)
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words), a benchmark for gender-inclusive evaluation.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
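The summary does not specify how the EAS is computed; purely as a hypothetical illustration of lexicon-based attitude scoring, a minimal sketch (the lexicon and averaging rule below are invented stand-ins, not the authors' method):

```python
# Purely hypothetical: the actual EAS computation is not given in this
# summary, so this toy lexicon and averaging rule are invented stand-ins.
TOY_ATTITUDE_LEXICON = {"stubborn": -0.6, "assertive": 0.4, "emotional": -0.3}

def toy_attitude_score(tokens):
    # Average the polarity of any attitude words found in a translation;
    # 0.0 means no attitude words were matched.
    hits = [TOY_ATTITUDE_LEXICON[t] for t in tokens if t in TOY_ATTITUDE_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(toy_attitude_score("they are assertive but stubborn".split()))  # -0.1
```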
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Generalizing Fairness to Generative Language Models via Reformulation of Non-discrimination Criteria [4.738231680800414]
This paper studies how to uncover and quantify the presence of gender biases in generative language models.
We derive generative AI analogues of three well-known non-discrimination criteria from classification, namely independence, separation and sufficiency.
Our results address the presence of occupational gender bias within such conversational language models.
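For reference, the three criteria have standard forms for binary classifiers; a minimal sketch of those classification versions (not the paper's generative analogues), assuming numpy arrays for true labels, predictions, and a binary gender attribute:

```python
# Sketch of the three classical non-discrimination criteria for a binary
# classifier, NOT the paper's generative-model analogues. y = true label,
# y_hat = prediction, a = binary protected attribute; data is illustrative.
import numpy as np

def independence_gap(y_hat, a):
    # Independence (demographic parity): P(y_hat = 1) should not depend on a.
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

def separation_gap(y_hat, y, a):
    # Separation (equalized odds): conditioned on the true label,
    # prediction rates should match across groups; report the worst gap.
    return max(abs(y_hat[(y == t) & (a == 0)].mean() -
                   y_hat[(y == t) & (a == 1)].mean()) for t in (0, 1))

def sufficiency_gap(y_hat, y, a):
    # Sufficiency (predictive parity): conditioned on the prediction,
    # true-label rates should match across groups.
    return max(abs(y[(y_hat == p) & (a == 0)].mean() -
                   y[(y_hat == p) & (a == 1)].mean()) for p in (0, 1))

rng = np.random.default_rng(0)
y, a, y_hat = (rng.integers(0, 2, 10_000) for _ in range(3))
print(independence_gap(y_hat, a))   # all ~0 for random, unbiased data
print(separation_gap(y_hat, y, a))
print(sufficiency_gap(y_hat, y, a))
```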
arXiv Detail & Related papers (2024-03-13T14:19:08Z)
- How To Build Competitive Multi-gender Speech Translation Models For Controlling Speaker Gender Translation [21.125217707038356]
When translating from notional gender languages into grammatical gender languages, the generated translation requires explicit gender assignments for various words, including those referring to the speaker.
To avoid such biased and non-inclusive behavior, the gender assignment of speaker-related expressions should be guided by externally-provided metadata about the speaker's gender.
This paper aims to achieve the same results by integrating the speaker's gender metadata into a single "multi-gender" neural ST model, which is easier to maintain.
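One established way to condition a single model on such metadata is a control tag prepended to the source sequence, as in multilingual NMT; whether this matches the paper's exact integration is this sketch's assumption:

```python
# Control-tag conditioning sketch; the tag vocabulary is illustrative and
# may differ from the paper's actual integration of speaker metadata.
GENDER_TAGS = {"female": "<to-F>", "male": "<to-M>"}

def tag_source(utterance_tokens, speaker_gender):
    # The tag lets one "multi-gender" ST model steer speaker-referred words
    # (e.g., adjective endings in grammatical-gender target languages).
    return [GENDER_TAGS[speaker_gender]] + utterance_tokens

print(tag_source(["I", "am", "tired"], "female"))
# ['<to-F>', 'I', 'am', 'tired']
```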
arXiv Detail & Related papers (2023-10-23T17:21:32Z)
- Unveiling Gender Bias in Terms of Profession Across LLMs: Analyzing and Addressing Sociological Implications [0.0]
The study examines existing research on gender bias in AI language models and identifies gaps in the current knowledge.
The findings shed light on gendered word associations, language usage, and biased narratives present in the outputs of Large Language Models.
The paper presents strategies for reducing gender bias in LLMs, including algorithmic approaches and data augmentation techniques.
arXiv Detail & Related papers (2023-07-18T11:38:45Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Interacting with Masculinities: A Scoping Review [13.32560004325655]
I present a 30-year history of masculinities in HCI work through a scoping review of 126 papers published in the ACM Human Factors in Computing Systems conference proceedings.
We must recognize the genderful nature of humanity, acknowledge the evasiveness of men and masculinities, and avoid burdening women and genderful folk as the central actors and targets of change.
arXiv Detail & Related papers (2023-04-22T08:51:41Z)
- Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
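A minimal sketch of such a frequency-difference audit, assuming a hypothetical external attribute detector (`detect_attributes` below is a placeholder, not the paper's automatic method):

```python
# Sketch of a frequency-difference audit over generated images; the detector
# is passed in because the paper's method is not specified here --
# `detect_attributes` is a hypothetical callable returning a set of
# presentation-centric attributes (e.g., clothing, hair) per image.
from collections import Counter

def attribute_frequency_gaps(images_f, images_m, detect_attributes):
    # Per-attribute difference in appearance rate between images generated
    # from prompts with female vs. male gender indicators.
    freq_f = Counter(a for img in images_f for a in detect_attributes(img))
    freq_m = Counter(a for img in images_m for a in detect_attributes(img))
    attrs = set(freq_f) | set(freq_m)
    return {a: freq_f[a] / max(len(images_f), 1)
             - freq_m[a] / max(len(images_m), 1)
            for a in attrs}
```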
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
- They, Them, Theirs: Rewriting with Gender-Neutral English [56.14842450974887]
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with a 1% word error rate using no human-labeled data.
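For contrast, a naive rule-based baseline (not the paper's trained model) shows why a learned rewriter is needed; it swaps pronouns without fixing verb agreement or the object/possessive ambiguity of "her":

```python
# Naive rule-based baseline (not the paper's model): swap third-person
# singular pronouns for singular-"they" forms. It ignores verb agreement
# ("she runs" -> "they runs") and maps object-case "her" to possessive
# "their" -- exactly the ambiguities a learned rewriter must resolve.
PRONOUN_MAP = {
    "he": "they", "she": "they", "him": "them",
    "his": "their", "her": "their",        # "her" is object OR possessive
    "himself": "themself", "herself": "themself",
}

def naive_neutralize(sentence):
    out = []
    for word in sentence.split():
        core = word.rstrip(".,!?")         # keep trailing punctuation
        repl = PRONOUN_MAP.get(core.lower())
        if repl and core[:1].isupper():
            repl = repl.capitalize()
        out.append(repl + word[len(core):] if repl else word)
    return " ".join(out)

print(naive_neutralize("She said his idea impressed her."))
# -> "They said their idea impressed their."  (note the object-case error)
```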
arXiv Detail & Related papers (2021-02-12T21:47:48Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
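The decomposition annotates gender along conversational dimensions: who is spoken about, who is speaking, and who is addressed (the framework's about/as/to dimensions). A sketch of such an annotation record (field names are this sketch's own, not the released datasets' exact schema):

```python
# Annotation record for the about/as/to decomposition; field names are
# illustrative, not the datasets' exact schema.
from dataclasses import dataclass

@dataclass
class GenderAnnotation:
    text: str
    about: str      # gender of the person(s) the text is about
    speaker: str    # gender of the author/speaker ("as" dimension)
    addressee: str  # gender of the person addressed ("to" dimension)

ex = GenderAnnotation("Tell her the meeting moved.", about="female",
                      speaker="unknown", addressee="unknown")
```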
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.