Identifying Constructive Conflict in Online Discussions through Controversial yet Toxicity Resilient Posts
- URL: http://arxiv.org/abs/2509.18303v1
- Date: Mon, 22 Sep 2025 18:30:41 GMT
- Title: Identifying Constructive Conflict in Online Discussions through Controversial yet Toxicity Resilient Posts
- Authors: Ozgur Can Seckin, Bao Tran Truong, Alessandro Flammini, Filippo Menczer
- Abstract summary: We operationalize controversiality to identify challenging dialogues and toxicity resilience to capture respectful conversations. We also find that political posts are often controversial and tend to attract more toxic responses. These findings suggest the potential for framing the tone of posts to encourage constructive political discussions.
- Score: 41.130462443875736
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bridging content that brings together individuals with opposing viewpoints on social media remains elusive, overshadowed by echo chambers and toxic exchanges. We propose that algorithmic curation could surface such content by considering constructive conflicts as a foundational criterion. We operationalize this criterion through controversiality to identify challenging dialogues and toxicity resilience to capture respectful conversations. We develop high-accuracy models to capture these dimensions. Analyses based on these models demonstrate that assessing resilience to toxic responses is not the same as identifying low-toxicity posts. We also find that political posts are often controversial and tend to attract more toxic responses. However, some posts, even the political ones, are resilient to toxicity despite being highly controversial, potentially sparking civil engagement. Toxicity resilient posts tend to use politeness cues, such as showing gratitude and hedging. These findings suggest the potential for framing the tone of posts to encourage constructive political discussions.
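The abstract's finding that toxicity-resilient posts use politeness cues such as gratitude and hedging can be sketched as a simple lexicon-based counter. The cue lists below are made up for illustration; the paper's actual cue inventory is not given here.

```python
import re

# Hypothetical lexicons, invented for this sketch (not the paper's cue lists).
GRATITUDE_CUES = {"thanks", "thank", "grateful", "appreciate"}
HEDGING_CUES = {"maybe", "perhaps", "might", "could", "seems", "somewhat"}

def politeness_cue_score(post: str) -> dict:
    """Count gratitude and hedging cues in a post via simple token matching."""
    tokens = re.findall(r"[a-z']+", post.lower())
    gratitude = sum(t in GRATITUDE_CUES for t in tokens)
    hedging = sum(t in HEDGING_CUES for t in tokens)
    return {"gratitude": gratitude, "hedging": hedging,
            "cue_rate": (gratitude + hedging) / max(len(tokens), 1)}

example = "Thanks for sharing; perhaps we could look at this differently."
print(politeness_cue_score(example))  # {'gratitude': 1, 'hedging': 2, 'cue_rate': 0.3}
```

A production system would use learned classifiers rather than raw lexicons, but the cue-rate idea is the same.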
Related papers
- Predictively Combatting Toxicity in Health-related Online Discussions through Machine Learning [2.9748898344267785]
We propose the alternative of combatting user toxicity predictively, anticipating where a user could interact toxically in health-related online discussions. Applying a Collaborative Filtering-based Machine Learning methodology, we predict the toxicity in COVID-related conversations between any user and subcommunity of Reddit, surpassing 80% predictive performance in relevant metrics.
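The collaborative-filtering idea above can be sketched as plain matrix factorization over a user x subcommunity toxicity matrix, predicting toxicity for unobserved pairs. The data, dimensions, and hyperparameters below are illustrative, not the paper's.

```python
import numpy as np

# Toy user x subcommunity toxicity-rate matrix (NaN = unobserved pair).
# Values and shape are invented for this sketch.
R = np.array([[0.9, 0.1, np.nan],
              [np.nan, 0.2, 0.3],
              [0.8, np.nan, 0.4]])

def factorize(R, k=2, steps=2000, lr=0.05, reg=0.01, seed=0):
    """SGD matrix factorization; fills in toxicity estimates for unseen pairs."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.normal(0, 0.1, (n, k))
    V = rng.normal(0, 0.1, (m, k))
    obs = [(i, j) for i in range(n) for j in range(m) if not np.isnan(R[i, j])]
    for _ in range(steps):
        for i, j in obs:
            err = R[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U @ V.T

pred = factorize(R)
print(pred.round(2))  # estimated toxicity for every user-subcommunity pair
```

The paper reports over 80% predictive performance with its own (unspecified here) model and features; this sketch only shows the collaborative-filtering mechanism.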
arXiv Detail & Related papers (2025-05-19T11:53:37Z)
- Toxicity Begets Toxicity: Unraveling Conversational Chains in Political Podcasts [5.573483199335299]
This work seeks to fill that gap by curating a dataset of political podcast transcripts and analyzing them with a focus on conversational structure. Specifically, we investigate how toxicity surfaces and intensifies through sequences of replies within these dialogues, shedding light on the organic patterns by which harmful language can escalate across conversational turns.
arXiv Detail & Related papers (2025-01-22T04:58:50Z)
- Toxic behavior silences online political conversations [0.0]
We investigate the hypothesis that individuals may refrain from expressing minority opinions publicly due to being exposed to toxic behavior. Using hidden Markov models, we identify a latent state consistent with toxicity-driven silence. Our findings offer insights into the intricacies of online political deliberation and emphasize the importance of considering self-censorship dynamics.
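The latent-state idea above can be sketched with a two-state hidden Markov model and a forward-backward pass over a user's activity sequence. The states, probabilities, and observation coding below are invented for illustration, not estimated from the paper's data.

```python
import numpy as np

# Illustrative 2-state HMM: state 0 = "engaged", state 1 = "toxicity-driven silence".
# All probabilities are made up for this sketch.
pi = np.array([0.9, 0.1])                  # initial state distribution
A = np.array([[0.8, 0.2],                  # transition matrix
              [0.3, 0.7]])
B = np.array([[0.7, 0.3],                  # emission: P(obs | state)
              [0.1, 0.9]])                 # obs 0 = posted, obs 1 = stayed silent

def forward_posteriors(obs):
    """Forward-backward: posterior P(state | all observations) at each step."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.ones((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

activity = [0, 0, 1, 1, 1]  # user posts twice, then goes quiet
print(forward_posteriors(activity).round(3))
```

A run of silent observations pushes the posterior toward the silence state, which is the kind of latent pattern the paper's models look for.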
arXiv Detail & Related papers (2024-12-07T20:39:20Z)
- Analyzing Toxicity in Deep Conversations: A Reddit Case Study [0.0]
This work employs a tree-based approach to understand how users behave concerning toxicity in public conversation settings.
We collect both the posts and the comment sections of the top 100 posts from 8 Reddit communities that allow profanity, totaling over 1 million responses.
We find that toxic comments increase the likelihood of subsequent toxic comments being produced in online conversations.
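The finding that toxic comments beget toxic replies can be illustrated by comparing reply toxicity rates conditioned on the parent comment in a thread tree. The toy thread and labels below are invented; the paper works with over a million real responses.

```python
# Toy comment tree: each comment has an id, a parent id, and a toxicity flag.
# Thread structure and labels are invented for this sketch.
comments = [
    {"id": 1, "parent": None, "toxic": False},
    {"id": 2, "parent": 1, "toxic": True},
    {"id": 3, "parent": 2, "toxic": True},
    {"id": 4, "parent": 2, "toxic": True},
    {"id": 5, "parent": 1, "toxic": False},
    {"id": 6, "parent": 5, "toxic": False},
    {"id": 7, "parent": 5, "toxic": True},
]

def reply_toxicity_rates(comments):
    """P(reply is toxic | parent toxic) vs P(reply is toxic | parent non-toxic)."""
    toxic_by_id = {c["id"]: c["toxic"] for c in comments}
    counts = {True: [0, 0], False: [0, 0]}  # parent_toxic -> [toxic replies, replies]
    for c in comments:
        if c["parent"] is None:
            continue
        parent_toxic = toxic_by_id[c["parent"]]
        counts[parent_toxic][0] += c["toxic"]
        counts[parent_toxic][1] += 1
    return {k: v[0] / v[1] for k, v in counts.items() if v[1]}

print(reply_toxicity_rates(comments))  # {True: 1.0, False: 0.5}
```

A higher rate under toxic parents is exactly the escalation signal the paper reports.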
arXiv Detail & Related papers (2024-04-11T16:10:44Z)
- Comprehensive Assessment of Toxicity in ChatGPT [49.71090497696024]
We evaluate the toxicity in ChatGPT by utilizing instruction-tuning datasets.
Prompts in creative writing tasks can be 2x more likely to elicit toxic responses.
Certain deliberately toxic prompts, designed in earlier studies, no longer yield harmful responses.
arXiv Detail & Related papers (2023-11-03T14:37:53Z)
- Twits, Toxic Tweets, and Tribal Tendencies: Trends in Politically Polarized Posts on Twitter [5.161088104035108]
We explore the role that partisanship and affective polarization play in contributing to toxicity on an individual level and a topic level on Twitter/X.
After collecting 89.6 million tweets from 43,151 Twitter/X users, we determine how several account-level characteristics, including partisanship, predict how often users post toxic content.
arXiv Detail & Related papers (2023-07-19T17:24:47Z)
- Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers and Hostile Intergroup Interactions on Reddit [66.09950457847242]
We study the activity of 5.97M Reddit users and 421M comments posted over 13 years.
We create a typology of relationships between political communities based on whether their users are toxic to each other.
arXiv Detail & Related papers (2022-11-25T22:17:07Z)
- Beyond Plain Toxic: Detection of Inappropriate Statements on Flammable Topics for the Russian Language [76.58220021791955]
We present two text collections labelled according to a binary notion of inappropriateness and a multinomial notion of sensitive topics.
To objectivise the notion of inappropriateness, we define it in a data-driven way through crowdsourcing.
arXiv Detail & Related papers (2022-03-04T15:59:06Z)
- Toxicity Detection can be Sensitive to the Conversational Context [64.28043776806213]
We construct and publicly release a dataset of 10,000 posts with two kinds of toxicity labels.
We introduce a new task, context sensitivity estimation, which aims to identify posts whose perceived toxicity changes if the context is also considered.
arXiv Detail & Related papers (2021-11-19T13:57:26Z)
- Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.