Understanding engagement with platform safety technology for reducing
exposure to online harms
- URL: http://arxiv.org/abs/2401.01796v1
- Date: Wed, 3 Jan 2024 15:50:43 GMT
- Title: Understanding engagement with platform safety technology for reducing
exposure to online harms
- Authors: Jonathan Bright, Florence E. Enock, Pica Johansson, Helen Z. Margetts,
Francesca Stevens
- Abstract summary: We show that experience of online harms is widespread, with 67% of people having seen what they perceived as harmful content online.
We show that use of safety technologies is high, with more than 80% of people having used at least one.
People who have previously seen online harms are more likely to use safety tools, implying a 'learning the hard way' route to engagement.
- Score: 1.0228192660021962
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: User-facing 'platform safety technology' encompasses an array of tools
offered by platforms to help people protect themselves from harm, for example
allowing people to report content and unfollow or block other users. These
tools are an increasingly important part of online safety: in the UK,
legislation has made it a requirement for large platforms to offer them.
However, little is known about user engagement with such tools. We present
findings from a nationally representative survey of UK adults covering their
awareness of and experiences with seven common safety technologies. We show
that experience of online harms is widespread, with 67% of people having seen
what they perceived as harmful content online; 26% of people have also had at
least one piece of content removed by content moderation. Use of safety
technologies is also high, with more than 80% of people having used at least
one. Awareness of specific tools is varied, with people more likely to be aware
of 'post-hoc' safety tools, such as reporting, than preventative measures.
However, satisfaction with safety technologies is generally low. People who
have previously seen online harms are more likely to use safety tools, implying
a 'learning the hard way' route to engagement. Those higher in digital literacy
are also more likely to use some of these tools, raising concerns about the
accessibility of these technologies to all users. Additionally, women are more
likely to engage in particular types of online 'safety work'. We discuss the
implications of our results for those seeking a safer online environment.
Related papers
- How Unique is Whose Web Browser? The role of demographics in browser fingerprinting among US users [50.699390248359265]
Browser fingerprinting can be used to identify and track users across the Web, even without cookies.
This technique and resulting privacy risks have been studied for over a decade.
We provide a first-of-its-kind dataset to enable further research.
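As a rough illustration of the underlying technique (not this paper's method or dataset), the Python sketch below hashes a handful of hypothetical browser attributes into a stable identifier, which is all a tracker needs to recognise a returning browser without cookies:

```python
# Illustrative only: the attribute names and values here are hypothetical,
# not drawn from the paper's dataset.
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Hash a browser's observable attributes into a stable identifier."""
    canonical = json.dumps(attributes, sort_keys=True)  # stable key ordering
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

browser = {
    "userAgent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "1920x1080x24",
    "timezone": "Europe/London",
    "language": "en-GB",
    "fonts": ["Arial", "DejaVu Sans", "Noto Sans"],
}
print(fingerprint(browser))  # identical attributes yield the same ID on every visit
```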
arXiv Detail & Related papers (2024-10-09T14:51:58Z)
- Women are less comfortable expressing opinions online than men and report heightened fears for safety: Surveying gender differences in experiences of online harms [0.7916214711737172]
Women are significantly more fearful of being targeted by harms overall.
They report greater negative psychological impact as a result of particular experiences.
Women report higher use of a range of safety tools and less comfort with several forms of online participation.
arXiv Detail & Related papers (2024-03-27T22:16:03Z)
- Factuality Challenges in the Era of Large Language Models [113.3282633305118]
Large Language Models (LLMs) can generate false, erroneous, or misleading content.
LLMs can be exploited for malicious applications.
This poses a significant challenge to society in terms of the potential deception of users.
arXiv Detail & Related papers (2023-10-08T14:55:02Z)
- User Attitudes to Content Moderation in Web Search [49.1574468325115]
We examine the levels of support for different moderation practices applied to potentially misleading and/or potentially offensive content in web search.
We find that the most supported practice is informing users about potentially misleading or offensive content, and the least supported one is the complete removal of search results.
More conservative users and users with lower levels of trust in web search results are more likely to be against content moderation in web search.
arXiv Detail & Related papers (2023-10-05T10:57:15Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
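A minimal sketch of that general approach follows, assuming hypothetical features extracted from outbound web requests (third-party status, payload entropy, identifier count); it is not the authors' actual model or feature set:

```python
# Toy supervised classifier for flagging privacy-infringing data collection.
# Features and labels are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# each row: [is_third_party, payload_entropy, num_identifiers_in_payload]
X_train = [[1, 7.2, 5], [0, 2.1, 0], [1, 6.8, 3], [0, 1.5, 0]]
y_train = [1, 0, 1, 0]  # 1 = request looks like tracking/data collection

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

request = [[1, 6.9, 4]]  # features of a new outbound request
action = "block" if clf.predict(request)[0] == 1 else "allow"
print(action)  # a privacy tool would drop or strip flagged requests
```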
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes significantly to countering malicious information by developing multilingual tools that simulate and detect new methods of content moderation evasion.
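To make the idea concrete, here is a toy Python sketch of one evasion pattern (leetspeak-style character substitution) and a naive normalisation-based detector; the character mapping and keyword list are invented, and the paper's multilingual tools are considerably more sophisticated:

```python
# Toy word-camouflage detector via character normalisation.
LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                      "5": "s", "7": "t", "@": "a", "$": "s"})

BLOCKLIST = {"viagra", "free money"}  # hypothetical moderated keywords

def decamouflage(text: str) -> str:
    """Undo simple character substitutions and strip separator noise."""
    text = text.lower().translate(LEET)
    return text.replace(".", "").replace("-", "").replace("_", "")

def is_evasive(text: str) -> bool:
    normalized = decamouflage(text).replace(" ", "")
    return any(term.replace(" ", "") in normalized for term in BLOCKLIST)

print(is_evasive("v1agr@"))        # True: camouflaged keyword detected
print(is_evasive("fr33-m0n3y"))    # True
print(is_evasive("holiday pics"))  # False
```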
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- PROVENANCE: An Intermediary-Free Solution for Digital Content Verification [3.82273842587301]
Provenance warns users when the content they are looking at may be misinformation or disinformation.
It is also designed to improve media literacy among its users.
Unlike similar plugins, which require human experts to provide evaluations, Provenance's state-of-the-art technology does not require human input.
arXiv Detail & Related papers (2021-11-16T21:42:23Z)
- Cybersecurity Misinformation Detection on Social Media: Case Studies on Phishing Reports and Zoom's Threats [1.2387676601792899]
We propose novel approaches for detecting misinformation about cybersecurity and privacy threats on social media.
We developed a framework for detecting inaccurate phishing claims on Twitter.
We also proposed another framework for detecting misinformation related to Zoom's security and privacy threats on multiple platforms.
arXiv Detail & Related papers (2021-10-23T20:45:24Z)
- Detecting Harmful Content On Online Platforms: What Platforms Need Vs. Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z)
- Preserving Integrity in Online Social Networks [13.347579281117628]
This paper surveys the state of the art in keeping online platforms and their users safe from harmful content.
We highlight the techniques that have been proven useful in practice and that deserve additional attention from the academic community.
arXiv Detail & Related papers (2020-09-22T04:32:24Z)