Women are less comfortable expressing opinions online than men and report heightened fears for safety: Surveying gender differences in experiences of online harms
- URL: http://arxiv.org/abs/2403.19037v1
- Date: Wed, 27 Mar 2024 22:16:03 GMT
- Title: Women are less comfortable expressing opinions online than men and report heightened fears for safety: Surveying gender differences in experiences of online harms
- Authors: Francesca Stevens, Florence E. Enock, Tvesha Sippy, Jonathan Bright, Miranda Cross, Pica Johansson, Judy Wajcman, Helen Z. Margetts
- Abstract summary: Women are significantly more fearful of being targeted by harms overall.
They report greater negative psychological impact as a result of particular experiences.
Women report higher use of a range of safety tools and less comfort with several forms of online participation.
- Score: 0.7916214711737172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online harms, such as hate speech, trolling and self-harm promotion, continue to be widespread. While some work suggests women are disproportionately affected, other studies find mixed evidence for gender differences in experiences with content of this kind. Using a nationally representative survey of UK adults (N=1992), we examine exposure to a variety of harms, fears surrounding being targeted, the psychological impact of online experiences, the use of safety tools to protect against harm, and comfort with various forms of online participation across men and women. We find that while men and women see harmful content online to a roughly similar extent, women are more at risk than men of being targeted by harms including online misogyny, cyberstalking and cyberflashing. Women are significantly more fearful of being targeted by harms overall, and report greater negative psychological impact as a result of particular experiences. Perhaps in an attempt to mitigate risk, women report higher use of a range of safety tools and less comfort with several forms of online participation, with just 23% of women comfortable expressing political views online compared to 40% of men. We also find direct associations between fears surrounding harms and comfort with online behaviours. For example, fear of being trolled significantly decreases comfort expressing opinions, and fear of being targeted by misogyny significantly decreases comfort sharing photos. Our results are important because with much public discourse happening online, we must ensure all members of society feel safe and able to participate in online spaces.
Related papers
- Social bias is prevalent in user reports of hate and abuse online [2.0507758560052207]
We examine the extent of social bias in the flagging of hate and abuse in four different intergroup contexts.
Overall, participants reported abuse reliably, with approximately half of the abusive comments in each study reported.
However, a pervasive social bias was present whereby ingroup-directed abuse was consistently flagged to a greater extent than outgroup-directed abuse.
arXiv Detail & Related papers (2025-10-06T12:27:19Z)
- Understanding gender differences in experiences and concerns surrounding online harms: A short report on a nationally representative survey of UK adults [0.8567685792108676]
We present preliminary results from a large, nationally representative survey of UK adults.
We ask about exposure to 15 specific harms, along with fears surrounding exposure and comfort engaging in certain online behaviours.
We find that women are significantly more fearful of experiencing every type of harm that we asked about, and are significantly less comfortable partaking in several online behaviours.
arXiv Detail & Related papers (2024-02-01T10:10:52Z)
- Understanding engagement with platform safety technology for reducing exposure to online harms [1.0228192660021962]
We show that experience of online harms is widespread, with 67% of people having seen what they perceived as harmful content online.
We show that use of safety technologies is high, with more than 80% of people having used at least one.
People who have previously seen online harms are more likely to use safety tools, implying a 'learning the hard way' route to engagement.
arXiv Detail & Related papers (2024-01-03T15:50:43Z)
- Sex differences in attitudes towards online privacy and anonymity among Israeli students with different technical backgrounds [0.6445605125467572]
Our aim was to comparatively model men and women's online privacy attitudes.
Various factors related to the user's online privacy and anonymity were considered.
Despite men's higher level of technological online privacy literacy, they were no more likely than women to engage in privacy-paradox behaviour.
arXiv Detail & Related papers (2023-08-07T12:36:37Z)
- Beyond Fish and Bicycles: Exploring the Varieties of Online Women's Ideological Spaces [12.429096784949952]
We perform a large-scale, data-driven analysis of over 6M Reddit comments and submissions from 14 subreddits.
We elicit a diverse taxonomy of online women's ideological spaces, ranging from the so-called Manosphere to Gender-Critical Feminism.
We shed light on two platforms, ovarit.com and thepinkpill.co, where two toxic communities of online women's ideological spaces migrated after their ban on Reddit.
arXiv Detail & Related papers (2023-03-13T13:39:45Z)
- Gendered Mental Health Stigma in Masked Language Models [38.766854150355634]
We investigate gendered mental health stigma in masked language models.
We find that models are consistently more likely to predict female subjects than male in sentences about having a mental health condition.
arXiv Detail & Related papers (2022-10-27T03:09:46Z)
- DISARM: Detecting the Victims Targeted by Harmful Memes [49.12165815990115]
DISARM is a framework that uses named entity recognition and person identification to detect harmful memes.
We show that DISARM significantly outperforms ten unimodal and multimodal systems.
It can reduce the error rate for harmful target identification by up to 9 points absolute over several strong multimodal rivals.
arXiv Detail & Related papers (2022-05-11T19:14:26Z)
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes, such as those featuring self-harm and extremism, remain largely unstudied.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- "I feel invaded, annoyed, anxious and I may protect myself": Individuals' Feelings about Online Tracking and their Protective Behaviour across Gender and Country [11.38723572165938]
Online tracking is a primary concern for Internet users, yet previous research has not found a clear link between the cognitive understanding of tracking and protective actions.
We conducted an online study, with N=614 participants, across the UK, Germany and France, to investigate how users feel about third-party tracking.
We found that most participants' feelings about tracking were negative, described as deeply intrusive.
We also observed indications of a 'privacy gender gap', where women feel more negatively about tracking yet are less likely than men to take protective actions.
arXiv Detail & Related papers (2022-02-09T19:08:14Z)
- #ContextMatters: Advantages and Limitations of Using Machine Learning to Support Women in Politics [0.15749416770494704]
ParityBOT was deployed across elections in Canada, the United States and New Zealand.
It was used to analyse and classify more than 12 million tweets directed at women candidates and counter toxic tweets with supportive ones.
We examine the rate of false negatives, where ParityBOT failed to pick up on insults directed at specific high profile women.
arXiv Detail & Related papers (2021-09-30T22:55:49Z)
- The Spread of Propaganda by Coordinated Communities on Social Media [43.2770127582382]
We analyze the spread of propaganda and its interplay with coordinated behavior on a large Twitter dataset about the 2019 UK general election.
The combination of the use of propaganda and coordinated behavior allows us to uncover the authenticity and harmfulness of the different communities.
arXiv Detail & Related papers (2021-09-27T13:39:10Z)
- Countering Online Hate Speech: An NLP Perspective [34.19875714256597]
Online toxicity - an umbrella term for online hateful behavior - manifests itself in forms such as online hate speech.
The rising mass communication through social media further exacerbates the harmful consequences of online hate speech.
This paper presents a holistic conceptual framework on hate-speech NLP countering methods along with a thorough survey on the current progress of NLP for countering online hate speech.
arXiv Detail & Related papers (2021-09-07T08:48:13Z)
- Survey of Cyber Violence Against Women in Malawi [0.0]
The purpose of this study was to investigate the prevalence of cyber violence against women in Karonga district of Malawi.
The study noted that women experienced various forms of cyber violence such as cyber bullying, cyber harassment, online defamation, cyberstalking, sexual exploitation, online hate speech, and revenge pornography.
It was found that women never reported the incidents to the police or community to seek support, owing to lack of awareness and to cultural and patriarchal factors.
arXiv Detail & Related papers (2021-08-22T18:02:06Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behaviour.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Detecting Harmful Content On Online Platforms: What Platforms Need Vs. Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z)
- Analyzing COVID-19 on Online Social Media: Trends, Sentiments and Emotions [44.92240076313168]
We analyze the affective trajectories of the American people and the Chinese people based on Twitter and Weibo posts between January 20th, 2020 and May 11th 2020.
By contrasting two very different countries, China and the United States, we reveal sharp differences in people's views on COVID-19 across cultures.
Our study provides a computational approach to unveiling public emotions and concerns about the pandemic in real time, which could help policy-makers better understand people's needs and thus make better-informed policy.
arXiv Detail & Related papers (2020-05-29T09:24:38Z)
- #MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media [71.74529365205053]
We analyze the influence of the #MeToo trend on a pool of college followers.
The results show that the majority of topics embedded in those #MeToo tweets detail sexual harassment stories.
There exists a significant correlation between the prevalence of this trend and official reports on several major geographical regions.
arXiv Detail & Related papers (2020-01-16T18:05:46Z)
- Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.