Using Grok to Avoid Personal Attacks While Correcting Misinformation on X
- URL: http://arxiv.org/abs/2601.04251v1
- Date: Tue, 06 Jan 2026 18:17:58 GMT
- Title: Using Grok to Avoid Personal Attacks While Correcting Misinformation on X
- Authors: Kevin Matthe Caramancion
- Abstract summary: This study presents empirical evidence that invoking Grok, the native large language model on X, is associated with different social responses during misinformation correction. Ad hominem attacks occurred in 72 percent of human-issued corrections and in none of the Grok-mediated corrections.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Correcting misinformation in public online spaces often exposes users to hostility and ad hominem attacks, discouraging participation in corrective discourse. This study presents empirical evidence that invoking Grok, the native large language model on X, rather than directly confronting other users, is associated with different social responses during misinformation correction. Using an observational design, 100 correction replies across five high-conflict misinformation topics were analyzed, with corrections balanced between Grok-mediated and direct human-issued responses. The primary outcome was whether a correction received at least one ad hominem attack within a 24-hour window. Ad hominem attacks occurred in 72 percent of human-issued corrections and in none of the Grok-mediated corrections. A chi-square test confirmed a statistically significant association with a large effect size. These findings suggest that AI-mediated correction may alter the social dynamics of public disagreement by reducing interpersonal hostility during misinformation responses.
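The reported result can be reproduced as a standard chi-square test on a 2x2 contingency table. The sketch below assumes the balanced design means 50 corrections per condition, so 72 percent of 50, i.e. 36 human-issued corrections, drew at least one ad hominem reply and 0 Grok-mediated corrections did; the choice of Yates' correction and Cramér's V as the effect size are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the reported chi-square analysis.
# Assumption: 100 corrections split 50/50 between conditions,
# with 72% of human-issued corrections (36 of 50) attacked.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: correction type; columns: [received ad hominem, did not]
table = np.array([
    [36, 14],  # human-issued corrections (72% attacked)
    [0, 50],   # Grok-mediated corrections (0% attacked)
])

chi2, p, dof, expected = chi2_contingency(table)  # Yates' correction by default for 2x2

# Cramér's V as one common effect-size measure for a 2x2 table
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(f"chi2 = {chi2:.2f}, p = {p:.2e}, Cramér's V = {cramers_v:.2f}")
```

On these assumed counts the test yields a Cramér's V above 0.7, consistent with the "large effect size" the abstract describes.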
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - Crowd Intelligence for Early Misinformation Prediction on Social Media [29.494819549803772]
We introduce CROWDSHIELD, a crowd intelligence-based method for early misinformation prediction.
We employ Q-learning to capture the two dimensions -- stances and claims.
We propose MIST, a manually annotated misinformation detection Twitter corpus.
arXiv Detail & Related papers (2024-08-08T13:45:23Z) - Missci: Reconstructing Fallacies in Misrepresented Science [84.32990746227385]
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers.
Missci is a novel argumentation-theoretic model for fallacious reasoning.
We present Missci as a dataset to test the critical reasoning abilities of large language models.
arXiv Detail & Related papers (2024-06-05T12:11:10Z) - LLM generated responses to mitigate the impact of hate speech [1.774563970628096]
This paper outlines the design of our automatic moderation system and proposes a simple metric for measuring user engagement.
We discuss the ethical considerations and challenges in deploying generative AI for discourse moderation.
arXiv Detail & Related papers (2023-11-28T16:08:42Z) - Countering Misinformation via Emotional Response Generation [15.383062216223971]
The proliferation of misinformation on social media platforms (SMPs) poses a significant danger to public health, social cohesion and democracy.
Previous research has shown how social correction can be an effective way to curb misinformation.
We present VerMouth, the first large-scale dataset comprising roughly 12 thousand claim-response pairs.
arXiv Detail & Related papers (2023-11-17T15:37:18Z) - Decoding the Silent Majority: Inducing Belief Augmented Social Graph
with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z) - SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable
Responses Created Through Human-Machine Collaboration [75.62448812759968]
SQuARe is a large-scale Korean dataset of 49k sensitive questions with 42k acceptable and 46k non-acceptable responses.
The dataset was constructed leveraging HyperCLOVA in a human-in-the-loop manner based on real news headlines.
arXiv Detail & Related papers (2023-05-28T11:51:20Z) - Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
arXiv Detail & Related papers (2021-09-20T13:31:15Z) - "Nice Try, Kiddo": Investigating Ad Hominems in Dialogue Responses [87.89632038677912]
Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining.
We propose categories of ad hominems, compose an annotated dataset, and build a system to analyze human and dialogue responses to English Twitter posts.
Our results indicate that 1) responses from both humans and DialoGPT contain more ad hominems for discussions around marginalized communities, 2) different quantities of ad hominems in the training data can influence the likelihood of generating ad hominems, and 3) constrained decoding techniques can reduce ad hominems (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-10-24T07:37:49Z) - Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - The COVID-19 Infodemic: Can the Crowd Judge Recent Misinformation
Objectively? [17.288917654501265]
We study whether crowdsourcing is an effective and reliable method to assess the truthfulness of statements during a pandemic.
We specifically target statements related to the COVID-19 health emergency, which was still ongoing at the time of the study.
In our experiment, crowd workers are asked to assess the truthfulness of statements, as well as to provide evidence for the assessments as a URL and a text justification.
arXiv Detail & Related papers (2020-08-13T05:53:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.