Girlhood Feminism as Soft Resistance: Affective Counterpublics and Algorithmic Negotiation on RedNote
- URL: http://arxiv.org/abs/2507.07059v1
- Date: Mon, 07 Jul 2025 20:12:24 GMT
- Title: Girlhood Feminism as Soft Resistance: Affective Counterpublics and Algorithmic Negotiation on RedNote
- Authors: Meng Liang, Xiaoyue Zhang, Linqi Ye
- Abstract summary: The article focuses on the reappropriation of the hashtag Baby Supplementary Food (BSF) on RedNote, a female-dominated lifestyle app with over 300 million users. We analyse how users create a female-centered counterpublic through self-infantilisation, algorithmic play, and aesthetic withdrawal.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article explores how Chinese female users tactically mobilise platform features and hashtag practices to construct vernacular forms and an exclusive space of feminist resistance under algorithmic and cultural constraints. Focusing on the reappropriation of the hashtag Baby Supplementary Food (BSF) on RedNote, a female-dominated lifestyle app with over 300 million users, we analyse how users create a female-centered counterpublic through self-infantilisation, algorithmic play, and aesthetic withdrawal. Using the Computer-Assisted Learning and Measurement (CALM) framework, we analysed 1580 posts and propose the concept of girlhood feminism: an affective, culturally grounded form of soft resistance that refuses patriarchal life scripts without seeking direct confrontation or visibility. Rather than challenging censorship and misogyny directly, users rework platform affordances and domestic idioms to carve out emotional and symbolic spaces of dissent. Situated within the broader dynamics of East Asia's compressed modernity, this essay challenges liberal feminist paradigms grounded in confrontation and transparency. It advances a regionally grounded framework for understanding how gendered publics are navigated, negotiated, and quietly reimagined in algorithmically governed spaces.
Related papers
- The Effects of Demographic Instructions on LLM Personas [14.283869154967835]
Social media platforms must filter sexist content in compliance with governmental regulations. Current machine learning approaches can reliably detect sexism based on standardized definitions. We adopt a perspectivist approach, retaining diverse annotations rather than enforcing gold-standard labels.
arXiv Detail & Related papers (2025-05-17T02:49:15Z) - GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models [73.23743278545321]
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but have also been observed to magnify societal biases. GenderCARE is a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics.
arXiv Detail & Related papers (2024-08-22T15:35:46Z) - Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z) - A Holistic Indicator of Polarization to Measure Online Sexism [2.498836880652668]
The online trend of the manosphere and feminist discourse on social networks requires a holistic measure of the level of sexism in an online community.
This indicator is important for policymakers and moderators of online communities.
We build a model that can provide a comparable holistic indicator of toxicity targeted toward male and female identity and male and female individuals.
arXiv Detail & Related papers (2024-04-02T18:00:42Z) - Understanding writing style in social media with a supervised contrastively pre-trained transformer [57.48690310135374]
Online Social Networks serve as fertile ground for harmful behavior, ranging from hate speech to the dissemination of disinformation.
We introduce the Style Transformer for Authorship Representations (STAR), trained on a large corpus derived from public sources of 4.5 × 10^6 authored texts.
Using a support base of 8 documents of 512 tokens, we can discern authors from sets of up to 1616 authors with at least 80% accuracy.
arXiv Detail & Related papers (2023-10-17T09:01:17Z) - "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation [69.25368160338043]
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
arXiv Detail & Related papers (2023-05-17T04:21:45Z) - Beyond Fish and Bicycles: Exploring the Varieties of Online Women's Ideological Spaces [12.429096784949952]
We perform a large-scale, data-driven analysis of over 6M Reddit comments and submissions from 14 subreddits.
We elicit a diverse taxonomy of online women's ideological spaces, ranging from the so-called Manosphere to Gender-Critical Feminism.
We shed light on two platforms, ovarit.com and thepinkpill.co, where two toxic communities of online women's ideological spaces migrated after their ban on Reddit.
arXiv Detail & Related papers (2023-03-13T13:39:45Z) - #ContextMatters: Advantages and Limitations of Using Machine Learning to Support Women in Politics [0.15749416770494704]
ParityBOT was deployed across elections in Canada, the United States and New Zealand.
It was used to analyse and classify more than 12 million tweets directed at women candidates and counter toxic tweets with supportive ones.
We examine the rate of false negatives, where ParityBOT failed to pick up on insults directed at specific high-profile women.
arXiv Detail & Related papers (2021-09-30T22:55:49Z) - They, Them, Theirs: Rewriting with Gender-Neutral English [56.14842450974887]
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with 1% word error rate with no human-labeled data.
arXiv Detail & Related papers (2021-02-12T21:47:48Z) - Black Feminist Musings on Algorithmic Oppression [0.0]
This paper unapologetically reflects on the critical role that Black feminism can and should play in abolishing algorithmic oppression.
I draw upon feminist philosophical critiques of science and technology and discuss histories and continuities of scientific oppression against historically marginalized people.
I end by inviting you to envision and imagine the struggle to abolish algorithmic oppression by abolishing oppressive systems and shifting algorithmic development practices.
arXiv Detail & Related papers (2021-01-25T03:04:05Z) - Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.