Uncovering Latent Arguments in Social Media Messaging by Employing LLMs-in-the-Loop Strategy
- URL: http://arxiv.org/abs/2404.10259v3
- Date: Thu, 22 Aug 2024 15:52:13 GMT
- Title: Uncovering Latent Arguments in Social Media Messaging by Employing LLMs-in-the-Loop Strategy
- Authors: Tunazzina Islam, Dan Goldwasser
- Abstract summary: The widespread use of social media has led to a surge in the popularity of automated methods for analyzing public opinion.
Traditional unsupervised methods for extracting themes from public discourse, such as topic modeling, often reveal overarching patterns that might not capture specific nuances.
We propose a generic LLMs-in-the-Loop strategy that leverages the advanced capabilities of Large Language Models.
- Score: 22.976609127865732
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The widespread use of social media has led to a surge in the popularity of automated methods for analyzing public opinion. Supervised methods are adept at text categorization, yet the dynamic nature of social media discussions poses a continual challenge for these techniques because the focus of discussion shifts constantly. On the other hand, traditional unsupervised methods for extracting themes from public discourse, such as topic modeling, often reveal only overarching patterns that miss specific nuances. Consequently, a significant portion of research into social media discourse still depends on labor-intensive manual coding and human-in-the-loop approaches, which are both time-consuming and costly. In this work, we study the problem of discovering the arguments associated with a specific theme. We propose a generic LLMs-in-the-Loop strategy that leverages the advanced capabilities of Large Language Models (LLMs) to extract latent arguments from social media messaging. To demonstrate our approach, we apply our framework to contentious topics, using two publicly available datasets: (1) a climate campaigns dataset of 14k Facebook ads with 25 themes and (2) a COVID-19 vaccine campaigns dataset of 9k Facebook ads with 14 themes. Additionally, we design a downstream stance-prediction task that leverages talking points in climate debates. Finally, we analyze demographic targeting and the adaptation of messaging in response to real-world events.
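The abstract describes the LLMs-in-the-Loop strategy only at a high level, so the sketch below is a hedged illustration of what one extraction step could look like: for a given theme, an LLM first proposes a small set of candidate arguments from a sample of ads, and each ad is then mapped to the closest candidate. The OpenAI client usage is standard, but the model name, prompts, and helper functions are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: model name, prompts, and helpers are assumptions,
# not the pipeline described in the paper.
from openai import OpenAI

client = OpenAI()       # expects OPENAI_API_KEY in the environment
MODEL = "gpt-4o-mini"   # placeholder model choice

def chat(prompt: str) -> str:
    """Send a single user prompt to the chat model and return its text reply."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

def propose_arguments(theme: str, ads: list[str], k: int = 5) -> list[str]:
    """Ask the LLM to induce k candidate arguments from a sample of ads under one theme."""
    sample = "\n".join(f"- {ad}" for ad in ads[:20])  # small sample to stay within context
    prompt = (
        f"The following Facebook ads all discuss the theme '{theme}':\n{sample}\n\n"
        f"List {k} distinct underlying arguments these ads make, one per line."
    )
    return [line.strip("-. ").strip() for line in chat(prompt).splitlines() if line.strip()]

def assign_argument(ad: str, arguments: list[str]) -> str:
    """Map a single ad to the closest candidate argument, or 'none' if nothing fits."""
    options = "\n".join(f"{i}. {arg}" for i, arg in enumerate(arguments))
    prompt = (
        f"Ad: {ad}\n\nCandidate arguments:\n{options}\n\n"
        "Reply with only the number of the argument the ad expresses, or 'none'."
    )
    answer = chat(prompt).strip()
    if answer.isdigit() and int(answer) < len(arguments):
        return arguments[int(answer)]
    return "none"
```

In the paper's setting, a step like this would run once per theme (25 climate themes, 14 vaccine themes), with the resulting assignments feeding downstream analyses such as stance prediction.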
Related papers
- Towards Scalable Topic Detection on Web via Simulating Levy Walks Nature of Topics in Similarity Space [55.97416108140739]
We present a novel yet powerful Explore-Exploit (EE) approach that groups topics by simulating the Lévy-walk nature of topics in the similarity space.
Experiments on two public datasets demonstrate that our approach not only matches state-of-the-art methods in effectiveness but also significantly outperforms them in efficiency.
arXiv Detail & Related papers (2024-07-26T07:19:46Z) - Discovering Latent Themes in Social Media Messaging: A Machine-in-the-Loop Approach Integrating LLMs [22.976609127865732]
We introduce a novel approach to uncovering latent themes in social media messaging.
Our work sheds light on the dynamic nature of social media, revealing the shifts in the thematic focus of messaging in response to real-world events.
arXiv Detail & Related papers (2024-03-15T21:54:00Z) - Social Convos: Capturing Agendas and Emotions on Social Media [1.6385815610837167]
We present a novel approach to extract influence indicators from messages circulating among groups of users discussing particular topics.
We focus on two influence indicators: the (control of) agenda and the use of emotional language.
arXiv Detail & Related papers (2024-02-23T19:14:09Z) - SoMeLVLM: A Large Vision Language Model for Social Media Processing [78.47310657638567]
We introduce a Large Vision Language Model for Social Media Processing (SoMeLVLM).
SoMeLVLM is a cognitive framework equipped with five key capabilities including knowledge & comprehension, application, analysis, evaluation, and creation.
Our experiments demonstrate that SoMeLVLM achieves state-of-the-art performance in multiple social media tasks.
arXiv Detail & Related papers (2024-02-20T14:02:45Z) - Modeling Political Orientation of Social Media Posts: An Extended Analysis [0.0]
Developing machine learning models to characterize political polarization on online social media presents significant challenges.
These challenges mainly stem from various factors such as the lack of annotated data, presence of noise in social media datasets, and the sheer volume of data.
We introduce two methods that leverage news media bias and post content to label social media posts.
We demonstrate that current machine learning models can achieve improved performance in predicting the political orientation of social media posts.
arXiv Detail & Related papers (2023-11-21T03:34:20Z) - Leveraging Large Language Models to Detect Influence Campaigns in Social Media [9.58546889761175]
Social media influence campaigns pose significant challenges to public discourse and democracy.
Traditional detection methods fall short due to the complexity and dynamic nature of social media.
We propose a novel detection method using Large Language Models (LLMs) that incorporates both user metadata and network structures.
arXiv Detail & Related papers (2023-11-14T00:25:09Z) - Cross-Platform Social Dynamics: An Analysis of ChatGPT and COVID-19 Vaccine Conversations [37.69303106863453]
We analyzed over 12 million posts and news articles related to two significant events: the release of ChatGPT in 2022 and the global discussions about COVID-19 vaccines in 2021.
Data was collected from multiple platforms, including Twitter, Facebook, Instagram, Reddit, YouTube, and GDELT.
We employed topic modeling techniques to uncover the distinct thematic emphases on each platform, which reflect their specific features and target audiences.
arXiv Detail & Related papers (2023-10-17T09:58:55Z) - Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z) - ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z) - Persuasion Strategies in Advertisements [68.70313043201882]
We introduce an extensive vocabulary of persuasion strategies and build the first ad image corpus annotated with persuasion strategies.
We then formulate the task of persuasion strategy prediction with multi-modal learning.
We conduct a real-world case study on 1600 advertising campaigns of 30 Fortune-500 companies.
arXiv Detail & Related papers (2022-08-20T07:33:13Z) - Author Clustering and Topic Estimation for Short Texts [69.54017251622211]
We propose a novel model that expands on Latent Dirichlet Allocation (LDA) by modeling strong dependence among the words in the same document (a minimal sketch of a plain LDA baseline appears after this list).
We also simultaneously cluster users, removing the need for post-hoc cluster estimation.
Our method performs as well as, or better than, traditional approaches to problems arising in short text.
arXiv Detail & Related papers (2021-06-15T20:55:55Z)
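Several of the entries above, like the main abstract, contrast LLM-based approaches with "traditional" topic modeling. As a point of reference, here is a minimal sketch of a plain LDA baseline on short texts using scikit-learn; the corpus, topic count, and preprocessing choices are illustrative assumptions, not the setup of any paper listed here.

```python
# Minimal plain-LDA baseline for short social media texts (illustrative assumptions only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def lda_topics(docs: list[str], n_topics: int = 10, top_words: int = 8) -> list[list[str]]:
    """Fit vanilla LDA on a list of documents and return the top words per topic."""
    vectorizer = CountVectorizer(stop_words="english", min_df=2)  # min_df=2 assumes a non-trivial corpus
    doc_term = vectorizer.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(doc_term)
    vocab = vectorizer.get_feature_names_out()
    # Each row of components_ scores the vocabulary for one topic; keep the highest-scoring words.
    return [[vocab[i] for i in topic.argsort()[::-1][:top_words]] for topic in lda.components_]
```

The word lists such a baseline returns are the "overarching patterns" the main abstract refers to; the argument-level output targeted by the LLMs-in-the-loop sketch above is considerably finer-grained.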
This list is automatically generated from the titles and abstracts of the papers on this site.