Thread With Caution: Proactively Helping Users Assess and Deescalate
Tension in Their Online Discussions
- URL: http://arxiv.org/abs/2212.01401v1
- Date: Fri, 2 Dec 2022 19:00:03 GMT
- Authors: Jonathan P. Chang, Charlotte Schluger, Cristian
Danescu-Niculescu-Mizil
- Abstract summary: Incivility remains a major challenge for online discussion platforms.
Traditionally, platforms have relied on moderators to -- with or without algorithmic assistance -- take corrective actions such as removing comments or banning users.
We propose a complementary paradigm that directly empowers users by proactively enhancing their awareness about existing tension in the conversation they are engaging in.
- Score: 13.455968033357065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incivility remains a major challenge for online discussion platforms, to such
an extent that even conversations between well-intentioned users can often
derail into uncivil behavior. Traditionally, platforms have relied on
moderators to -- with or without algorithmic assistance -- take corrective
actions such as removing comments or banning users. In this work we propose a
complementary paradigm that directly empowers users by proactively enhancing
their awareness about existing tension in the conversation they are engaging in
and actively guides them as they are drafting their replies to avoid further
escalation.
As a proof of concept for this paradigm, we design an algorithmic tool that
provides such proactive information directly to users, and conduct a user study
in a popular discussion platform. Through a mixed methods approach combining
surveys with a randomized controlled experiment, we uncover qualitative and
quantitative insights regarding how the participants utilize and react to this
information. Most participants report finding this proactive paradigm valuable,
noting that it helps them to identify tension that they may have otherwise
missed and prompts them to further reflect on their own replies and to revise
them. These effects are corroborated by a comparison of how the participants
draft their reply when our tool warns them that their conversation is at risk
of derailing into uncivil behavior versus in a control condition where the tool
is disabled. These preliminary findings highlight the potential of this
user-centered paradigm and point to concrete directions for future
implementations.
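The tool described above shows a warning to the drafting user when the conversation's estimated risk of derailing crosses a threshold, and stays silent in the control condition. A minimal sketch of that decision flow, using a toy keyword heuristic in place of the paper's trained forecasting model (the marker list, threshold, and function names are hypothetical):

```python
# Sketch of a proactive tension-warning check, assuming a placeholder
# keyword heuristic rather than the trained conversational forecasting
# model used in the actual study.

TENSION_MARKERS = {"idiot", "stupid", "shut up", "nonsense"}
WARN_THRESHOLD = 0.5  # hypothetical risk cutoff for showing a warning


def derailment_risk(comments):
    """Toy proxy score: fraction of comments containing a tension marker."""
    if not comments:
        return 0.0
    flagged = sum(
        any(marker in c.lower() for marker in TENSION_MARKERS) for c in comments
    )
    return flagged / len(comments)


def maybe_warn(comments):
    """Return a warning string for the drafting user, or None (control)."""
    risk = derailment_risk(comments)
    if risk >= WARN_THRESHOLD:
        return f"Heads up: this thread shows signs of tension (risk={risk:.2f})."
    return None
```

In the study's framing, `maybe_warn` returning `None` corresponds to the control condition where the tool is disabled; any real deployment would replace `derailment_risk` with a model trained to forecast derailment.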
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- User Strategization and Trustworthy Algorithms [81.82279667028423]
We show that user strategization can actually help platforms in the short term.
We then show that it corrupts platforms' data and ultimately hurts their ability to make counterfactual decisions.
arXiv Detail & Related papers (2023-12-29T16:09:42Z)
- Fostering User Engagement in the Critical Reflection of Arguments [3.26297440422721]
We propose a system that engages in a deliberative dialogue with a human.
We enable the system to intervene if the user is too focused on their pre-existing opinion.
We report on a user study with 58 participants to test our model and the effect of the intervention mechanism.
arXiv Detail & Related papers (2023-08-17T15:48:23Z)
- Silence Speaks Volumes: Re-weighting Techniques for Under-Represented Users in Fake News Detection [25.5495085102178]
A mere 1% of users generate the majority of the content on social networking sites.
The remaining users, though engaged to varying degrees, tend to be less active in content creation and largely silent.
We propose to leverage re-weighting techniques to make the silent majority heard, and in turn, investigate whether the cues from these users can improve the performance of the current models for the downstream task of fake news detection.
arXiv Detail & Related papers (2023-08-03T20:04:20Z)
- Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z)
- User-Centered Security in Natural Language Processing [0.7106986689736825]
This dissertation proposes a framework of user-centered security in Natural Language Processing (NLP).
It focuses on two security domains within NLP with great public interest.
arXiv Detail & Related papers (2023-01-10T22:34:19Z)
- Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support [12.515485963557426]
The reactive paradigm of taking action against already-posted antisocial content is currently the most common form of moderation.
We explore how automation could assist with this existing proactive moderation workflow by building a prototype tool.
arXiv Detail & Related papers (2022-11-29T19:00:02Z)
- Interacting with Non-Cooperative User: A New Paradigm for Proactive Dialogue Policy [83.61404191470126]
We propose a new solution named I-Pro that can learn Proactive policy in the Interactive setting.
Specifically, we learn the trade-off via a learned goal weight, which consists of four factors.
The experimental results demonstrate I-Pro significantly outperforms baselines in terms of effectiveness and interpretability.
arXiv Detail & Related papers (2022-04-07T14:11:31Z)
- Linking the Dynamics of User Stance to the Structure of Online Discussions [6.853826783413853]
We investigate whether users' stance concerning contentious subjects is influenced by the online discussions they are exposed to.
We set up a series of predictive exercises based on machine learning models.
We find that the most informative features relate to the stance composition of the discussion in which users prefer to engage.
arXiv Detail & Related papers (2021-01-25T02:08:54Z)
- Advances and Challenges in Conversational Recommender Systems: A Survey [133.93908165922804]
We provide a systematic review of the techniques used in current conversational recommender systems (CRSs).
We summarize the key challenges of developing CRSs into five directions.
These research directions involve multiple research fields like information retrieval (IR), natural language processing (NLP), and human-computer interaction (HCI).
arXiv Detail & Related papers (2021-01-23T08:53:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.