Transgender Community Sentiment Analysis from Social Media Data: A
Natural Language Processing Approach
- URL: http://arxiv.org/abs/2010.13062v2
- Date: Wed, 20 Jul 2022 04:54:28 GMT
- Title: Transgender Community Sentiment Analysis from Social Media Data: A
Natural Language Processing Approach
- Authors: Yuqiao Liu, Yudan Wang, Ying Zhao and Zhixiang Li
- Abstract summary: The transgender community experiences a large disparity in mental health conditions compared with the general population.
In this study, we manually categorize 300 social media comments posted by transgender people into negative, positive, and neutral sentiment classes.
- Score: 3.044968666863866
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The transgender community experiences a large disparity in mental
health conditions compared with the general population. Interpreting the social
media data posted by transgender people may help us better understand the
sentiments of these minority groups and apply early interventions. In this
study, we manually categorize 300 social media comments posted by transgender
people into negative, positive, and neutral sentiment classes. Five machine
learning algorithms and two deep neural networks are adopted to build sentiment
analysis classifiers on the annotated data. Results show that our annotations
are reliable, with a Cohen's kappa score above 0.8 across all three classes.
The LSTM model yields the best performance, with accuracy above 0.85 and an
AUC of 0.876. Our next step will focus on applying advanced natural language
processing algorithms to a larger annotated dataset.
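The pipeline described above — inter-annotator agreement via Cohen's kappa, then a classical classifier over the annotated comments — can be sketched in a few lines. This is an illustrative example, not the authors' code: the annotator labels and comments are placeholders, and TF-IDF plus logistic regression stands in for one of the five unnamed classical classifiers.

```python
# Minimal sketch of the paper's workflow: (1) check annotation
# reliability with Cohen's kappa, (2) train a classical sentiment
# classifier on the labelled comments. All data below is toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

# Two hypothetical annotators labelling the same six comments
# (-1 = negative, 0 = neutral, 1 = positive).
annotator_a = [-1, -1, 0, 1, 1, 0]
annotator_b = [-1, -1, 0, 1, 0, 0]
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.3f}")  # 0.750 on this toy data

# Placeholder comments standing in for the annotated social media posts.
comments = [
    "I finally feel like myself today",   # positive
    "so grateful for this community",     # positive
    "nobody listens and it hurts",        # negative
    "another day of feeling invisible",   # negative
    "just checking in",                   # neutral
    "posted an update on my blog",        # neutral
]
labels = [1, 1, -1, -1, 0, 0]

# TF-IDF features + logistic regression: a typical classical baseline.
X = TfidfVectorizer().fit_transform(comments)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
pred = clf.predict(X[:1])
```

A kappa above 0.8, as the paper reports, would indicate near-perfect agreement on the Landis–Koch scale; the toy labels here disagree on one of six items and land at 0.75.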
Related papers
- Agree to Disagree? A Meta-Evaluation of LLM Misgendering [84.77694174309183]
We conduct a systematic meta-evaluation of probability- and generation-based evaluation methods for misgendering.
By automatically evaluating a suite of 6 models from 3 families, we find that these methods can disagree with each other at the instance, dataset, and model levels.
We also show that misgendering behaviour is complex and goes far beyond pronouns, suggesting essential disagreement with human evaluations.
arXiv Detail & Related papers (2025-04-23T19:52:02Z) - Predictive Insights into LGBTQ+ Minority Stress: A Transductive Exploration of Social Media Discourse [0.0]
LGBTQ+ people are more likely to experience poorer health than their heterosexual and cisgender counterparts.
Minority stress is frequently expressed in posts on social media platforms.
We develop a hybrid model using Graph Neural Networks (GNN) and Bidirectional Encoder Representations from Transformers (BERT) to improve the classification performance of minority stress detection.
arXiv Detail & Related papers (2024-11-20T18:35:41Z) - Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words)
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z) - Tokenization Matters: Navigating Data-Scarce Tokenization for Gender Inclusive Language Technologies [75.85462924188076]
Gender-inclusive NLP research has documented the harmful limitations of gender binary-centric large language models (LLM)
We find that misgendering is significantly influenced by Byte-Pair Encoding (BPE) tokenization.
We propose two techniques: (1) pronoun tokenization parity, a method to enforce consistent tokenization across gendered pronouns, and (2) utilizing pre-existing LLM pronoun knowledge to improve neopronoun proficiency.
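The tokenization effect that paper describes is easy to illustrate: subword vocabularies learned from frequency data keep common pronouns whole but fragment rarer neopronouns. The sketch below is illustrative only — the greedy longest-match segmenter and the tiny vocabulary are assumptions, not the paper's method or a real BPE vocabulary.

```python
# Illustrative sketch: greedy longest-match subword segmentation over a
# hypothetical vocabulary, showing how a frequent pronoun stays a single
# token while a neopronoun fragments into characters.
def subword_tokenize(word, vocab):
    """Segment `word` greedily, always taking the longest piece in `vocab`."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])         # fall back to a single character
            i += 1
    return tokens

# Hypothetical vocabulary: "she" is frequent enough to be one token,
# while the neopronoun "xe" never merged and must be spelled out.
vocab = {"she", "he", "x", "e", "s"}
print(subword_tokenize("she", vocab))  # ['she']
print(subword_tokenize("xe", vocab))   # ['x', 'e']
```

"Pronoun tokenization parity" in the paper's sense would force both pronouns to the same token granularity, so the model sees them as comparable units.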
arXiv Detail & Related papers (2023-12-19T01:28:46Z) - Auditing Gender Analyzers on Text Data [7.73812434373948]
We audit three existing gender analyzers -- uClassify, Readable and HackerFactor, for biases against non-binary individuals.
The tools are designed to predict only cisgender binary labels, which leads to discrimination against non-binary members of society.
To address this, we fine-tune a BERT multi-label classifier on the two datasets in multiple combinations.
arXiv Detail & Related papers (2023-10-09T18:13:07Z) - Automatically measuring speech fluency in people with aphasia: first
achievements using read-speech data [55.84746218227712]
This study assesses the relevance of a signal processing algorithm, initially developed in the field of language acquisition, for the automatic measurement of speech fluency.
arXiv Detail & Related papers (2023-08-09T07:51:40Z) - "I'm fully who I am": Towards Centering Transgender and Non-Binary
Voices to Measure Biases in Open Language Generation [69.25368160338043]
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
arXiv Detail & Related papers (2023-05-17T04:21:45Z) - Theories of "Gender" in NLP Bias Research [0.0]
We survey nearly 200 articles concerning gender bias in NLP.
We find that the majority of the articles do not make their theorization of gender explicit.
Many conflate sex characteristics, social gender, and linguistic gender in ways that disregard the existence and experience of trans, nonbinary, and intersex people.
arXiv Detail & Related papers (2022-05-05T09:20:53Z) - Research on Gender-related Fingerprint Features [3.0466371774923644]
We propose a more robust method, Dense Dilated Convolution ResNet (DDC-ResNet) to extract valid gender information from fingerprints.
By replacing the normal convolution operations with atrous convolutions in the backbone, prior knowledge is provided to preserve edge details and the global receptive field can be extended.
Experimental results demonstrate that the combination of our approach outperforms other combinations in terms of average accuracy and separate-gender accuracy.
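The receptive-field claim above rests on how dilation works: spacing the kernel taps widens the input span seen by each output without adding parameters. A minimal 1-D sketch (not the paper's DDC-ResNet, and pure Python rather than a deep learning framework):

```python
# Illustrative 1-D dilated (atrous) convolution, valid mode.
# With dilation d and kernel size k, each output sees a receptive
# field of (k - 1) * d + 1 input samples -- wider context, same kernel.
def dilated_conv1d(signal, kernel, dilation):
    span = (len(kernel) - 1) * dilation + 1  # receptive field width
    return [
        sum(kernel[k] * signal[start + k * dilation]
            for k in range(len(kernel)))
        for start in range(len(signal) - span + 1)
    ]

x = [1, 2, 3, 4, 5, 6]
k = [1, 0, -1]                  # simple difference kernel
print(dilated_conv1d(x, k, 1))  # receptive field 3 -> [-2, -2, -2, -2]
print(dilated_conv1d(x, k, 2))  # receptive field 5 -> [-4, -4]
```

The same three-weight kernel covers five input samples at dilation 2 instead of three at dilation 1, which is the mechanism the paper uses to keep edge detail while extending the receptive field.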
arXiv Detail & Related papers (2021-08-18T16:54:34Z) - Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by
Ranking Algorithms [68.85295025020942]
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first specifically tailored measure for Information Retrieval, capable of quantifying representational harms.
arXiv Detail & Related papers (2020-09-02T20:45:04Z) - Gender Classification and Bias Mitigation in Facial Images [7.438105108643341]
Recent research showed that algorithms trained on biased benchmark databases could result in algorithmic bias.
We conducted surveys on existing benchmark databases for facial recognition and gender classification tasks.
We worked to increase classification accuracy and mitigate algorithmic biases on our baseline model trained on the augmented benchmark database.
arXiv Detail & Related papers (2020-07-13T01:09:06Z) - A Framework for the Computational Linguistic Analysis of Dehumanization [52.735780962665814]
We analyze discussions of LGBTQ people in the New York Times from 1986 to 2015.
We find increasingly humanizing descriptions of LGBTQ people over time.
The ability to analyze dehumanizing language at a large scale has implications for automatically detecting and understanding media bias as well as abusive language online.
arXiv Detail & Related papers (2020-03-06T03:02:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.