Text-mining forma mentis networks reconstruct public perception of the
STEM gender gap in social media
- URL: http://arxiv.org/abs/2003.08835v1
- Date: Wed, 18 Mar 2020 13:39:23 GMT
- Authors: Massimo Stella
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mindset reconstruction maps how individuals structure and perceive knowledge,
a map unfolded here by investigating language and its cognitive reflection in
the human mind, i.e. the mental lexicon. Textual forma mentis networks (TFMNs;
"forma mentis" is Latin for "mindset") are glass boxes introduced for
extracting, representing and understanding the structure of mindsets from
textual data. Combining
network science, psycholinguistics and Big Data, TFMNs successfully identified
relevant concepts, without supervision, in benchmark texts. Once validated,
TFMNs were applied to the case study of the gender gap in science, which was
strongly linked to distorted mindsets by recent studies. Focusing on social
media perception and online discourse, this work analysed 10,000 relevant
tweets. "Gender" and "gap" elicited a mostly positive perception, with a
trustful/joyous emotional profile and semantic associates that: celebrated
successful female scientists, related gender gap to wage differences, and hoped
for a future resolution. The perception of "woman" highlighted discussion about
sexual harassment and stereotype threat (a form of implicit cognitive bias)
concerning women in science "sacrificing personal skills for success". The
reconstructed perception of "man" highlighted social users' awareness of the
myth of male superiority in science. No anger was detected around "person",
suggesting that gap-focused discourse was less tense around genderless terms.
No stereotypical perception of "scientist" was identified online, unlike in
real-world surveys. The overall analysis identified the online discourse as
promoting a mostly stereotype-free, positive and trustful perception of gender
disparity, aware of implicit and explicit biases and oriented toward closing the gap.
TFMNs opened new ways for investigating perceptions in different groups,
offering detailed data-informed grounding for policy making.
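The abstract describes TFMNs as networks of word associations extracted from text and enriched with emotional labels. As a loose illustration only (not the paper's actual pipeline, which relies on psycholinguistic association norms and validated valence data), a toy version can be sketched as a co-occurrence network whose nodes carry labels from a small, hypothetical valence lexicon:

```python
# Toy sketch of a text-derived semantic network with emotional labels.
# NOT the paper's TFMN method: texts and the valence lexicon below are
# hypothetical stand-ins used purely to illustrate the general idea.
from collections import defaultdict
from itertools import combinations

texts = [
    "gender gap in science must close",
    "women in science face stereotype threat",
    "close the wage gap in science",
]

# Hypothetical valence lexicon; real work would use validated norms.
valence = {"close": "positive", "threat": "negative", "gap": "negative"}

# Edges: pairs of words co-occurring within the same short text.
edges = defaultdict(int)
for text in texts:
    words = sorted(set(text.split()))
    for a, b in combinations(words, 2):
        edges[(a, b)] += 1

# Node-level emotional profile: valence label where the lexicon has one.
nodes = {w for pair in edges for w in pair}
profile = {w: valence.get(w, "neutral") for w in nodes}

print(len(nodes), profile["gap"])  # → 12 negative
```

From a network like this one could then inspect a concept's neighbourhood and the emotional labels attached to it, which is the spirit of the "perception of a word" analyses reported in the abstract.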
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words (2024-07-23)
  Existing machine translation gender bias evaluations are primarily focused on male and female genders.
  This study presents AmbGIMT, a benchmark for Gender-Inclusive Machine Translation with Ambiguous attitude words.
  It proposes a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
- A Holistic Indicator of Polarization to Measure Online Sexism (2024-04-02)
  The online trend of manosphere and feminist discourse on social networks requires a holistic measure of the level of sexism in an online community.
  This indicator is important for policymakers and moderators of online communities.
  The authors build a model that provides a comparable holistic indicator of toxicity targeted toward male and female identities and individuals.
- "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation (2023-05-17)
  Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
  The authors assess how the social reality surrounding the experienced marginalization of TGNB persons contributes to and persists within open language generation.
  They introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
- Beyond Fish and Bicycles: Exploring the Varieties of Online Women's Ideological Spaces (2023-03-13)
  A large-scale, data-driven analysis of over 6M Reddit comments and submissions from 14 subreddits.
  It elicits a diverse taxonomy of online women's ideological spaces, ranging from the so-called Manosphere to Gender-Critical Feminism.
  It sheds light on two platforms, ovarit.com and thepinkpill.co, where two toxic communities of online women's ideological spaces migrated after their ban from Reddit.
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation (2022-05-19)
  The authors investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
  Their results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
  These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
- Theories of "Gender" in NLP Bias Research (2022-05-05)
  A survey of nearly 200 articles concerning gender bias in NLP.
  It finds that the majority of the articles do not make their theorization of gender explicit.
  Many conflate sex characteristics, social gender, and linguistic gender in ways that disregard the existence and experience of trans, nonbinary, and intersex people.
- Towards Gender-Neutral Face Descriptors for Mitigating Bias in Face Recognition (2020-06-14)
  State-of-the-art deep networks implicitly encode gender information while being trained for face recognition.
  Gender is often viewed as an important attribute for identifying faces.
  The authors present a novel Adversarial Gender De-biasing algorithm (AGENDA) to reduce the gender information present in face descriptors.
- Multi-Dimensional Gender Bias Classification (2020-05-01)
  Machine learning models can inadvertently learn socially undesirable patterns when trained on gender-biased text.
  The authors propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
  Using this fine-grained framework, they automatically annotate eight large-scale datasets with gender information.
- Unsupervised Discovery of Implicit Gender Bias (2020-04-17)
  An unsupervised approach to identifying gender bias against women at the comment level.
  The main challenge is forcing the model to focus on signs of implicit bias rather than on other artifacts in the data.
- A Framework for the Computational Linguistic Analysis of Dehumanization (2020-03-06)
  An analysis of discussions of LGBTQ people in the New York Times from 1986 to 2015, finding increasingly humanizing descriptions of LGBTQ people over time.
  The ability to analyze dehumanizing language at a large scale has implications for automatically detecting and understanding media bias as well as abusive language online.
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.