Decoding the Silent Majority: Inducing Belief Augmented Social Graph
with Large Language Model for Response Forecasting
- URL: http://arxiv.org/abs/2310.13297v1
- Date: Fri, 20 Oct 2023 06:17:02 GMT
- Title: Decoding the Silent Majority: Inducing Belief Augmented Social Graph
with Large Language Model for Response Forecasting
- Authors: Chenkai Sun, Jinning Li, Yi R. Fung, Hou Pong Chan, Tarek Abdelzaher,
ChengXiang Zhai, Heng Ji
- Abstract summary: SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses the existing state of the art in experimental evaluations for both zero-shot and supervised settings.
- Score: 74.68371461260946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic response forecasting for news media plays a crucial role in
enabling content producers to efficiently predict the impact of news releases
and prevent unexpected negative outcomes such as social conflict and moral
injury. To effectively forecast responses, it is essential to develop measures
that leverage the social dynamics and contextual information surrounding
individuals, especially in cases where explicit profiles or historical actions
of the users are limited (referred to as lurkers). As shown in a previous
study, 97% of all tweets are produced by only the most active 25% of users.
However, existing approaches have limited exploration of how to best process
and utilize these important features. To address this gap, we propose a novel
framework, named SocialSense, that leverages a large language model to induce a
belief-centered graph on top of an existing social network, along with
graph-based propagation to capture social dynamics. We hypothesize that the
induced graph that bridges the gap between distant users who share similar
beliefs allows the model to effectively capture the response patterns. Our
method surpasses the existing state of the art in experimental evaluations for both
zero-shot and supervised settings, demonstrating its effectiveness in response
forecasting. Moreover, the analysis reveals the framework's capability to
effectively handle unseen user and lurker scenarios, further highlighting its
robustness and practical applicability.
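The abstract describes two core components: an LLM that induces belief nodes layered on top of the existing follower network, so that distant but like-minded users become connected, and graph-based propagation over this augmented structure so that signal reaches lurkers with little history. The sketch below is only a minimal illustration of that pipeline under assumed details: the belief-extraction prompt, the keyword fallback, the `("belief", label)` node convention, and the mean-aggregation `propagate` step are placeholders invented for this example, not the authors' SocialSense implementation.

```python
# Minimal sketch of the belief-augmented graph idea (an illustration only; the
# LLM call, belief labels, and propagation rule are hypothetical placeholders).
from collections import defaultdict

def extract_beliefs(user_texts, llm=None):
    """Map each user's posts to a small set of belief labels.

    `llm` is a placeholder callable (prompt -> list of belief strings); when it
    is absent we fall back to a toy keyword heuristic so the sketch runs.
    """
    beliefs = {}
    for user, texts in user_texts.items():
        if llm is not None:
            beliefs[user] = llm(
                "List the beliefs expressed in: " + " ".join(texts)
            )
        else:  # toy stand-in for the LLM
            beliefs[user] = sorted({w for t in texts for w in t.lower().split()
                                    if w in {"climate", "economy", "health"}})
    return beliefs

def build_belief_augmented_graph(follow_edges, beliefs):
    """Add belief nodes and user-belief edges on top of the social network."""
    adj = defaultdict(set)
    for u, v in follow_edges:               # original social edges
        adj[u].add(v)
        adj[v].add(u)
    for user, labels in beliefs.items():    # belief-centered bridging nodes
        for b in labels:
            bnode = ("belief", b)
            adj[user].add(bnode)
            adj[bnode].add(user)
    return adj

def propagate(adj, features, steps=2):
    """One simple mean-aggregation pass per step (stand-in for a GNN layer)."""
    for _ in range(steps):
        updated = {}
        for node, nbrs in adj.items():
            vecs = [features.get(n, 0.0) for n in nbrs] + [features.get(node, 0.0)]
            updated[node] = sum(vecs) / len(vecs)
        features = updated
    return features

if __name__ == "__main__":
    posts = {"alice": ["climate policy now"],
             "bob": ["the economy is struggling"],
             "carol": ["climate change is real"]}   # carol acts as a lurker-like user
    follows = [("alice", "bob")]                    # carol has no social edges at all
    graph = build_belief_augmented_graph(follows, extract_beliefs(posts))
    scores = propagate(graph, {"alice": 1.0, "bob": -0.5, "carol": 0.0})
    print(scores[("belief", "climate")], scores["carol"])
```

In this toy run, `carol` has no follower edges, yet she still receives propagated signal through the shared climate belief node; that bridging effect is the intuition the abstract gives for handling lurkers and unseen users.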
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Graph Neural Networks for Antisocial Behavior Detection on Twitter [0.0]
The resurgence of antisocial behavior on social media has fueled a downward spiral of stereotypical beliefs and hateful comments towards individuals and social groups.
Advances in graph neural networks employed on massive quantities of graph-structured data raise high hopes for the future of mediating communication on social media platforms.
A graph convolutional approach was employed to better capture the dependencies between the heterogeneous types of data.
arXiv Detail & Related papers (2023-12-28T00:25:12Z)
- DySuse: Susceptibility Estimation in Dynamic Social Networks [2.736093604280113]
We propose a task, called susceptibility estimation in dynamic social networks, which is more realistic and valuable in real-world applications.
We leverage a structural feature module to independently capture the structural information of influence diffusion on each single graph snapshot.
Our framework is superior to the existing dynamic graph embedding models and has satisfactory prediction performance in multiple influence diffusion models.
arXiv Detail & Related papers (2023-08-21T03:28:34Z)
- Measuring the Effect of Influential Messages on Varying Personas [67.1149173905004]
We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona might have upon seeing a news message.
The proposed task not only introduces personalization in the modeling but also predicts the sentiment polarity and intensity of each response.
This enables more accurate and comprehensive inference on the mental state of the persona.
arXiv Detail & Related papers (2023-05-25T21:01:00Z)
- In the Eye of the Beholder: Robust Prediction with Causal User Modeling [27.294341513692164]
We propose a learning framework for relevance prediction that is robust to changes in the data distribution.
Our key observation is that robustness can be obtained by accounting for how users causally perceive the environment.
arXiv Detail & Related papers (2022-06-01T11:33:57Z)
- Preference Enhanced Social Influence Modeling for Network-Aware Cascade Prediction [59.221668173521884]
We propose a novel framework to improve cascade size prediction by enhancing user preference modeling.
Our end-to-end method makes the user activation process in information diffusion more adaptive and accurate.
arXiv Detail & Related papers (2022-04-18T09:25:06Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned on different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)