Detecting Harmful Content On Online Platforms: What Platforms Need Vs.
Where Research Efforts Go
- URL: http://arxiv.org/abs/2103.00153v2
- Date: Tue, 6 Jun 2023 16:22:16 GMT
- Title: Detecting Harmful Content On Online Platforms: What Platforms Need Vs.
Where Research Efforts Go
- Authors: Arnav Arora, Preslav Nakov, Momchil Hardalov, Sheikh Muhammad Sarwar,
Vibha Nayak, Yoan Dinkov, Dimitrina Zlatkova, Kyle Dent, Ameya Bhatawdekar,
Guillaume Bouchard, Isabelle Augenstein
- Abstract summary: Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
- Score: 44.774035806004214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of harmful content on online platforms is a major societal
problem, which comes in many different forms including hate speech, offensive
language, bullying and harassment, misinformation, spam, violence, graphic
content, sexual abuse, self harm, and many others. Online platforms seek to
moderate such content to limit societal harm, to comply with legislation, and
to create a more inclusive environment for their users. Researchers have
developed different methods for automatically detecting harmful content, often
focusing on specific sub-problems or on narrow communities, as what is
considered harmful often depends on the platform and on the context. We argue
that there is currently a dichotomy between what types of harmful content
online platforms seek to curb, and what research efforts there are to
automatically detect such content. We thus survey existing methods as well as
content moderation policies by online platforms in this light and we suggest
directions for future work.
Related papers
- On the Use of Proxies in Political Ad Targeting [49.61009579554272]
We show that major political advertisers circumvented mitigations by targeting proxy attributes.
Our findings have crucial implications for the ongoing discussion on the regulation of political advertising.
arXiv Detail & Related papers (2024-10-18T17:15:13Z)
- Content Moderation on Social Media in the EU: Insights From the DSA Transparency Database [0.0]
The Digital Services Act (DSA) requires large social media platforms in the EU to provide clear and specific information whenever they restrict access to certain content.
Statements of Reasons (SoRs) are collected in the DSA Transparency Database to ensure transparency and scrutiny of content moderation decisions.
We empirically analyze 156 million SoRs within an observation period of two months to provide an early look at content moderation decisions of social media platforms in the EU.
arXiv Detail & Related papers (2023-12-07T16:56:19Z)
- User Attitudes to Content Moderation in Web Search [49.1574468325115]
We examine the levels of support for different moderation practices applied to potentially misleading and/or potentially offensive content in web search.
We find that the most supported practice is informing users about potentially misleading or offensive content, and the least supported one is the complete removal of search results.
More conservative users and users with lower levels of trust in web search results are more likely to be against content moderation in web search.
arXiv Detail & Related papers (2023-10-05T10:57:15Z)
- Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of evading content moderation (a minimal illustrative sketch of keyword camouflage appears after this list).
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- SoK: Content Moderation in Social Media, from Guidelines to Enforcement, and Research to Practice [9.356143195807064]
We study the 14 most popular social media content moderation guidelines and practices in the US.
We identify the differences between the content moderation employed in mainstream social media platforms compared to fringe platforms.
We highlight why platforms should shift from a one-size-fits-all model to a more inclusive model.
arXiv Detail & Related papers (2022-06-29T18:48:04Z)
- TeamX@DravidianLangTech-ACL2022: A Comparative Analysis for Troll-Based Meme Classification [21.32190107220764]
Harmful content online has raised concerns among social media platforms, government agencies, policymakers, and society as a whole.
Among the different types of harmful content, trolling-based online content is one example, where the idea is to post a message that is provocative, offensive, or menacing with an intent to mislead the audience.
This study provides a comparative analysis of troll-based meme classification using textual, visual, and multimodal content.
arXiv Detail & Related papers (2022-05-09T16:19:28Z)
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes are not really studied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- When Curation Becomes Creation: Algorithms, Microcontent, and the Vanishing Distinction between Platforms and Creators [30.71023908707896]
We argue that any coherent regulatory framework must adapt to the modern social media landscape, in which the distinction between platforms and creators is vanishing.
arXiv Detail & Related papers (2021-07-01T13:37:05Z)
- Preserving Integrity in Online Social Networks [13.347579281117628]
This paper surveys the state of the art in keeping online platforms and their users safe from such harm.
We highlight the techniques that have been proven useful in practice and that deserve additional attention from the academic community.
arXiv Detail & Related papers (2020-09-22T04:32:24Z)
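To give a rough sense of the keyword camouflage discussed in the word-camouflage entry above, here is a minimal Python sketch, not the paper's actual multilingual tooling: it simulates leetspeak-style character substitutions and detects them by normalizing text before matching against a toy blocklist. The substitution map, blocklist, and function names are assumptions made purely for illustration.

```python
import re

# Hypothetical leetspeak-style substitutions an evader might use;
# the map and blocklist are illustrative assumptions, not data from the paper.
CAMOUFLAGE_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "$"}
REVERSE_MAP = {v: k for k, v in CAMOUFLAGE_MAP.items()}

BLOCKLIST = {"scam", "spam"}  # toy keyword list


def camouflage(word: str) -> str:
    """Simulate evasion by swapping characters for look-alike symbols."""
    return "".join(CAMOUFLAGE_MAP.get(ch, ch) for ch in word.lower())


def normalize(text: str) -> str:
    """Undo the simulated substitutions before keyword matching."""
    return "".join(REVERSE_MAP.get(ch, ch) for ch in text.lower())


def contains_blocked_keyword(text: str) -> bool:
    """Normalize the text, then match its tokens against the blocklist."""
    tokens = re.findall(r"[a-z0-9]+", normalize(text))
    return any(token in BLOCKLIST for token in tokens)


if __name__ == "__main__":
    evasive = "win big with this " + camouflage("scam") + "!"  # "$c4m"
    print(evasive, "->", contains_blocked_keyword(evasive))  # True
```

Real-world evasion is far more varied (homoglyphs, spacing, word splitting, cross-lingual tricks), so such normalization would typically be combined with learned classifiers rather than used on its own.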