TAIGR: Towards Modeling Influencer Content on Social Media via Structured, Pragmatic Inference
- URL: http://arxiv.org/abs/2601.20032v1
- Date: Tue, 27 Jan 2026 20:12:57 GMT
- Title: TAIGR: Towards Modeling Influencer Content on Social Media via Structured, Pragmatic Inference
- Authors: Nishanth Sridhar Nakshatri, Eylon Caplan, Rajkumar Pujari, Dan Goldwasser
- Abstract summary: Claim-centric verification methods struggle to capture the pragmatic meaning of influencer discourse. We propose a structured framework designed to analyze influencer discourse, which operates in three stages. We show that accurate validation requires modeling the discourse's pragmatic and argumentative structure.
- Score: 19.35061674485291
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Health influencers play a growing role in shaping public beliefs, yet their content is often conveyed through conversational narratives and rhetorical strategies rather than explicit factual claims. As a result, claim-centric verification methods struggle to capture the pragmatic meaning of influencer discourse. In this paper, we propose TAIGR (Takeaway Argumentation Inference with Grounded References), a structured framework designed to analyze influencer discourse, which operates in three stages: (1) identifying the core influencer recommendation (the takeaway); (2) constructing an argumentation graph that captures the influencer's justification for the takeaway; (3) performing factor graph-based probabilistic inference to validate the takeaway. We evaluate TAIGR on a content validation task over influencer video transcripts on health, showing that accurate validation requires modeling the discourse's pragmatic and argumentative structure rather than treating transcripts as flat collections of claims.
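The abstract's stage (3) — validating a takeaway by probabilistic inference over a factor graph built from the argumentation graph — can be illustrated with a minimal sketch. This is not the paper's implementation: the variable names, factor potentials, and prior probabilities below are hypothetical, and the inference is brute-force enumeration, which only works because the toy graph is tiny.

```python
import itertools

# Binary variables: 1 = the claim/takeaway holds, 0 = it does not.
# "evidence_a"/"evidence_b" stand in for grounded justification nodes
# from a hypothetical argumentation graph; "takeaway" is the node
# whose validity we want to infer.
variables = ["evidence_a", "evidence_b", "takeaway"]

def support_factor(evidence, takeaway):
    """Pairwise potential: the takeaway is more plausible when a
    supporting evidence node agrees with it (illustrative values)."""
    return 0.9 if evidence == takeaway else 0.1

def prior_factor(value, p_true):
    """Unary potential: prior belief that a node holds, e.g. from
    grounding a claim against external references."""
    return p_true if value == 1 else 1.0 - p_true

def marginal(query_var):
    """P(query_var = 1) by enumerating all joint assignments and
    normalizing -- exact inference, feasible only for small graphs."""
    scores = {0: 0.0, 1: 0.0}
    for assignment in itertools.product([0, 1], repeat=len(variables)):
        a = dict(zip(variables, assignment))
        weight = (
            prior_factor(a["evidence_a"], 0.8)    # well-grounded claim
            * prior_factor(a["evidence_b"], 0.3)  # dubious claim
            * support_factor(a["evidence_a"], a["takeaway"])
            * support_factor(a["evidence_b"], a["takeaway"])
        )
        scores[a[query_var]] += weight
    return scores[1] / (scores[0] + scores[1])

print(round(marginal("takeaway"), 3))  # → 0.595
```

With one well-grounded and one dubious supporting claim, the takeaway's posterior lands just above 0.5 — the point of inference over the argument structure, rather than checking claims in isolation, is that the takeaway's validity aggregates the reliability of everything supporting it.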
Related papers
- Detecting Winning Arguments with Large Language Models and Persuasion Strategies [7.089321248525487]
This work investigates the role of persuasion strategies in determining the persuasiveness of a text. We conduct experiments on three annotated argument datasets: Winning Arguments (built from the Change My View subreddit), Anthropic/Persuasion, and Persuasion for Good. Results show that strategy-guided reasoning improves the prediction of persuasiveness.
arXiv Detail & Related papers (2026-01-15T18:30:15Z) - MMPersuade: A Dataset and Evaluation Framework for Multimodal Persuasion [73.99171322670772]
Large Vision-Language Models (LVLMs) are increasingly deployed in domains such as shopping, health, and news. MMPersuade provides a unified framework for systematically studying multimodal persuasion dynamics in LVLMs.
arXiv Detail & Related papers (2025-10-26T17:39:21Z) - Joint Effects of Argumentation Theory, Audio Modality and Data Enrichment on LLM-Based Fallacy Classification [0.038233569758620044]
This study investigates how context and emotional tone metadata influence large language model (LLM) reasoning and performance in fallacy classification tasks. Using data from U.S. presidential debates, we classify six fallacy types through various prompting strategies applied to the Qwen-3 (8B) model.
arXiv Detail & Related papers (2025-09-14T06:35:34Z) - Context Does Matter: Implications for Crowdsourced Evaluation Labels in Task-Oriented Dialogue Systems [57.16442740983528]
Crowdsourced labels play a crucial role in evaluating task-oriented dialogue systems.
Previous studies suggest using only a portion of the dialogue context in the annotation process.
This study investigates the influence of dialogue context on annotation quality.
arXiv Detail & Related papers (2024-04-15T17:56:39Z) - DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z) - An Item Response Theory Framework for Persuasion [3.0938904602244346]
We apply Item Response Theory, popular in education and political science research, to the analysis of argument persuasiveness in language.
We empirically evaluate the model's performance on three datasets, including a novel dataset in the area of political advocacy.
arXiv Detail & Related papers (2022-04-24T19:14:11Z) - Persua: A Visual Interactive System to Enhance the Persuasiveness of Arguments in Online Discussion [52.49981085431061]
Enhancing people's ability to write persuasive arguments could contribute to the effectiveness and civility in online communication.
We derived four design goals for a tool that helps users improve the persuasiveness of arguments in online discussions.
Persua is an interactive visual system that provides example-based guidance on persuasive strategies to enhance the persuasiveness of arguments.
arXiv Detail & Related papers (2022-04-16T08:07:53Z) - Exploring Discourse Structures for Argument Impact Classification [48.909640432326654]
This paper empirically shows that the discourse relations between two arguments along the context path are essential factors for identifying the persuasive power of an argument.
We propose DisCOC to inject and fuse the sentence-level structural information with contextualized features derived from large-scale language models.
arXiv Detail & Related papers (2021-06-02T06:49:19Z) - Examining the Ordering of Rhetorical Strategies in Persuasive Requests [58.63432866432461]
We use a Variational Autoencoder model to disentangle content and rhetorical strategies in textual requests from a large-scale loan request corpus.
We find that specific (orderings of) strategies interact uniquely with a request's content to impact success rate, and thus the persuasiveness of a request.
arXiv Detail & Related papers (2020-10-09T15:10:44Z) - Influence via Ethos: On the Persuasive Power of Reputation in Deliberation Online [10.652828373995513]
Deliberation among individuals online plays a key role in shaping the opinions that drive votes, purchases, donations and other critical offline behavior.
Our research examines the persuasive power of ethos -- an individual's "reputation."
We find that an individual's reputation significantly impacts their persuasion rate above and beyond the validity, strength and presentation of their arguments.
arXiv Detail & Related papers (2020-06-01T04:25:40Z) - The Role of Pragmatic and Discourse Context in Determining Argument Impact [39.70446357000737]
This paper presents a new dataset to initiate the study of this aspect of argumentation.
It consists of a diverse collection of arguments covering 741 controversial topics and comprising over 47,000 claims.
We propose predictive models that incorporate the pragmatic and discourse context of argumentative claims and show that they outperform models that rely on claim-specific linguistic features for predicting the perceived impact of individual claims within a particular line of argument.
arXiv Detail & Related papers (2020-04-06T23:00:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.