Proactive Moderation of Online Discussions: Existing Practices and the
Potential for Algorithmic Support
- URL: http://arxiv.org/abs/2211.16525v1
- Date: Tue, 29 Nov 2022 19:00:02 GMT
- Title: Proactive Moderation of Online Discussions: Existing Practices and the
Potential for Algorithmic Support
- Authors: Charlotte Schluger, Jonathan P. Chang, Cristian
Danescu-Niculescu-Mizil, Karen Levy
- Abstract summary: The reactive paradigm of taking action against already-posted antisocial content is currently the most common form of moderation.
We explore how automation could assist this existing proactive moderation workflow by building a prototype tool.
- Score: 12.515485963557426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To address the widespread problem of uncivil behavior, many online discussion
platforms employ human moderators to take action against objectionable content,
such as removing it or placing sanctions on its authors. This reactive paradigm
of taking action against already-posted antisocial content is currently the
most common form of moderation, and has accordingly underpinned many recent
efforts at introducing automation into the moderation process. Comparatively
less work has been done to understand other moderation paradigms -- such as
proactively discouraging the emergence of antisocial behavior rather than
reacting to it -- and the role algorithmic support can play in these paradigms.
In this work, we investigate such a proactive framework for moderation in a
case study of a collaborative setting: Wikipedia Talk Pages. We employ a mixed
methods approach, combining qualitative and design components for a holistic
analysis. Through interviews with moderators, we find that despite a lack of
technical and social support, moderators already engage in a number of
proactive moderation behaviors, such as preemptively intervening in
conversations to keep them on track. Further, we explore how automation could
assist with this existing proactive moderation workflow by building a prototype
tool, presenting it to moderators, and examining how the assistance it provides
might fit into their workflow. The resulting feedback uncovers both strengths
and drawbacks of the prototype tool and suggests concrete steps towards further
developing such assisting technology so it can most effectively support
moderators in their existing proactive moderation workflow.
Related papers
- Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z)
- Sim-to-Real Causal Transfer: A Metric Learning Approach to Causally-Aware Interaction Representations [62.48505112245388]
We take an in-depth look at the causal awareness of modern representations of agent interactions.
We show that recent representations are already partially resilient to perturbations of non-causal agents.
We propose a metric learning approach that regularizes latent representations with causal annotations.
arXiv Detail & Related papers (2023-12-07T18:57:03Z)
- Combatting Human Trafficking in the Cyberspace: A Natural Language Processing-Based Methodology to Analyze the Language in Online Advertisements [55.2480439325792]
This project tackles the pressing issue of human trafficking in online C2C marketplaces through advanced Natural Language Processing (NLP) techniques.
We introduce a novel methodology for generating pseudo-labeled datasets with minimal supervision, serving as a rich resource for training state-of-the-art NLP models.
A key contribution is the implementation of an interpretability framework using Integrated Gradients, providing explainable insights crucial for law enforcement.
arXiv Detail & Related papers (2023-11-22T02:45:01Z)
- Can Language Model Moderators Improve the Health of Online Discourse? [26.191337231826246]
We establish a systematic definition of conversational moderation effectiveness grounded on moderation literature.
We propose a comprehensive evaluation framework to assess models' moderation capabilities independently of human intervention.
arXiv Detail & Related papers (2023-11-16T11:14:22Z)
- Toxicity Detection is NOT all you Need: Measuring the Gaps to Supporting Volunteer Content Moderators [19.401873797111662]
We conduct a model review on Hugging Face to reveal the availability of models to cover various moderation rules and guidelines.
We put state-of-the-art LLMs to the test, evaluating how well these models perform in flagging violations of platform rules from one particular forum.
Overall, we observe a non-trivial gap: developed models are missing for many rules, and LLMs exhibit moderate to low performance on a significant portion of the rules.
arXiv Detail & Related papers (2023-11-14T03:18:28Z)
- Nip it in the Bud: Moderation Strategies in Open Source Software Projects and the Role of Bots [17.02726827353919]
This study examines the various structures and norms that support community moderation in open source software projects.
We interviewed 14 practitioners to uncover existing moderation practices and ways that automation can provide assistance.
Our main contributions include a characterization of moderated content in OSS projects, moderation techniques, as well as perceptions of and recommendations for improving the automation of moderation tasks.
arXiv Detail & Related papers (2023-08-14T19:42:51Z)
- Boosting Distress Support Dialogue Responses with Motivational Interviewing Strategy [4.264192013842096]
We show how some response types could be rephrased into a more MI-adherent form.
We build several rephrasers by fine-tuning Blender and GPT-3 to rephrase MI non-adherent "Advise without permission" responses into "Advise with permission" responses.
arXiv Detail & Related papers (2023-05-17T13:18:28Z)
- Thread With Caution: Proactively Helping Users Assess and Deescalate Tension in Their Online Discussions [13.455968033357065]
Incivility remains a major challenge for online discussion platforms.
Traditionally, platforms have relied on moderators to -- with or without algorithmic assistance -- take corrective actions such as removing comments or banning users.
We propose a complementary paradigm that directly empowers users by proactively enhancing their awareness about existing tension in the conversation they are engaging in.
arXiv Detail & Related papers (2022-12-02T19:00:03Z)
- Interacting with Non-Cooperative User: A New Paradigm for Proactive Dialogue Policy [83.61404191470126]
We propose a new solution named I-Pro that can learn Proactive policy in the Interactive setting.
Specifically, we learn the trade-off via a learned goal weight, which consists of four factors.
The experimental results demonstrate I-Pro significantly outperforms baselines in terms of effectiveness and interpretability.
arXiv Detail & Related papers (2022-04-07T14:11:31Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- On the Social and Technical Challenges of Web Search Autosuggestion Moderation [118.47867428272878]
Autosuggestions are typically generated by machine learning (ML) systems trained on a corpus of search logs and document representations.
While current search engines have become increasingly proficient at suppressing problematic suggestions, persistent issues remain.
We discuss several dimensions of problematic suggestions, difficult issues along the pipeline, and why our discussion applies to the increasing number of applications beyond web search.
arXiv Detail & Related papers (2020-07-09T19:22:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.