Can LLMs Learn Macroeconomic Narratives from Social Media?
- URL: http://arxiv.org/abs/2406.12109v1
- Date: Mon, 17 Jun 2024 21:37:09 GMT
- Title: Can LLMs Learn Macroeconomic Narratives from Social Media?
- Authors: Almog Gueta, Amir Feder, Zorik Gekhman, Ariel Goldstein, Roi Reichart
- Abstract summary: We introduce two curated datasets containing posts from X (formerly Twitter) which capture economy-related narratives.
We extract and summarize narratives from the tweets using Natural Language Processing (NLP) methods.
We test their predictive power for $\textit{macroeconomic}$ forecasting by incorporating the tweets' or the extracted narratives' representations in downstream financial prediction tasks.
- Score: 20.11321226491191
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study empirically tests the $\textit{Narrative Economics}$ hypothesis, which posits that narratives (ideas that are spread virally and affect public beliefs) can influence economic fluctuations. We introduce two curated datasets containing posts from X (formerly Twitter) which capture economy-related narratives (Data will be shared upon paper acceptance). Employing Natural Language Processing (NLP) methods, we extract and summarize narratives from the tweets. We test their predictive power for $\textit{macroeconomic}$ forecasting by incorporating the tweets' or the extracted narratives' representations in downstream financial prediction tasks. Our work highlights the challenges in improving macroeconomic models with narrative data, paving the way for the research community to realistically address this important challenge. From a scientific perspective, our investigation offers valuable insights and NLP tools for narrative extraction and summarization using Large Language Models (LLMs), contributing to future research on the role of narratives in economics.
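The abstract describes incorporating tweet or narrative representations into downstream financial prediction tasks. A minimal sketch of one plausible setup is below; the synthetic data, the choice of ridge regression, and the feature layout are all illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: folding per-period narrative embeddings (e.g. mean-pooled
# tweet vectors) into a macroeconomic prediction task alongside a simple
# autoregressive feature. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)

# Stand-ins for real data: narrative embeddings per period and a macro
# target such as next-period inflation or an index return.
n_periods, emb_dim = 120, 16
narrative_emb = rng.normal(size=(n_periods, emb_dim))
lagged_target = rng.normal(size=(n_periods, 1))  # autoregressive baseline feature
target = rng.normal(size=n_periods)              # value to forecast

# Concatenate the narrative features with the baseline feature.
X = np.hstack([lagged_target, narrative_emb])

# Time-ordered splits avoid leaking future information into training.
scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=4).split(X):
    model = Ridge(alpha=1.0).fit(X[train_idx], target[train_idx])
    scores.append(model.score(X[test_idx], target[test_idx]))

print(f"mean out-of-sample R^2 over {len(scores)} folds: {np.mean(scores):.3f}")
```

With purely random data the out-of-sample R^2 will be near or below zero; the point of the sketch is only the plumbing, i.e. how text-derived features can sit next to autoregressive ones in a forecasting model.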
Related papers
- EconNLI: Evaluating Large Language Models on Economics Reasoning [22.754757518792395]
Large Language Models (LLMs) are widely used for writing economic analysis reports or providing financial advice.
We propose a new dataset, natural language inference on economic events (EconNLI), to evaluate LLMs' knowledge and reasoning abilities in the economic domain.
Our experiments reveal that LLMs are not sophisticated in economic reasoning and may generate wrong or hallucinated answers.
arXiv Detail & Related papers (2024-07-01T11:58:24Z) - Effect of Leaders Voice on Financial Market: An Empirical Deep Learning Expedition on NASDAQ, NSE, and Beyond [1.6622844933418388]
Deep learning-based models are proposed to predict financial market trends based on NLP analysis of the Twitter handles of leaders in different fields.
The Indian and US financial markets are explored in the present work, while other markets can be considered in the future.
arXiv Detail & Related papers (2024-03-18T18:19:08Z) - Fair Abstractive Summarization of Diverse Perspectives [103.08300574459783]
A fair summary should provide a comprehensive coverage of diverse perspectives without underrepresenting certain groups.
We first formally define fairness in abstractive summarization as not underrepresenting perspectives of any groups of people.
We propose four reference-free automatic metrics by measuring the differences between target and source perspectives.
arXiv Detail & Related papers (2023-11-14T03:38:55Z) - Mastering the Task of Open Information Extraction with Large Language Models and Consistent Reasoning Environment [52.592199835286394]
Open Information Extraction (OIE) aims to extract objective structured knowledge from natural texts.
Large language models (LLMs) have exhibited remarkable in-context learning capabilities.
arXiv Detail & Related papers (2023-10-16T17:11:42Z) - Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z) - ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z) - Context-faithful Prompting for Large Language Models [51.194410884263135]
Large language models (LLMs) encode parametric knowledge about world facts.
Their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks.
We assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention.
arXiv Detail & Related papers (2023-03-20T17:54:58Z) - A Survey on Event-based News Narrative Extraction [10.193264105560862]
This survey presents an extensive study of research in the area of event-based news narrative extraction.
We screened over 900 articles, yielding 54 relevant ones.
Based on the reviewed studies, we identify recent trends, open challenges, and potential research lines.
arXiv Detail & Related papers (2023-02-16T15:11:53Z) - Template-based Abstractive Microblog Opinion Summarisation [26.777997436856076]
We introduce the task of microblog opinion summarisation (MOS) and share a dataset of 3100 gold-standard opinion summaries.
The dataset contains summaries of tweets spanning a 2-year period and covers more topics than any other public Twitter summarisation dataset.
arXiv Detail & Related papers (2022-08-08T12:16:01Z) - Revisiting Rashomon: A Comment on "The Two Cultures" [95.81740983484471]
Breiman dubbed this the "Rashomon Effect": the situation in which many models satisfy predictive accuracy criteria equally well but process information in substantially different ways.
This phenomenon can make it difficult to draw conclusions or automate decisions based on a model fit to data.
I make connections to recent work in the Machine Learning literature that explores the implications of this issue.
arXiv Detail & Related papers (2021-04-05T20:51:58Z) - Macroeconomic forecasting through news, emotions and narrative [12.762298148425796]
This study expands the existing body of research by incorporating a wide array of emotions from newspapers around the world into macroeconomic forecasts.
We model industrial production and consumer prices across a diverse range of economies using an autoregressive framework.
We find that emotions associated with happiness and anger have the strongest predictive power for the variables we predict.
arXiv Detail & Related papers (2020-09-23T10:10:03Z)
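The last summary above describes modeling macroeconomic variables in an autoregressive framework with emotion features. A minimal sketch of that idea, an AR(2) model augmented with exogenous emotion scores, is below; the data, lag order, and coefficients are illustrative assumptions, not the paper's actual specification.

```python
# Hedged sketch: an AR(2) regression augmented with exogenous "emotion"
# scores, in the spirit of an autoregressive framework with news-derived
# emotion features. All data here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(1)
T = 200
emotions = rng.normal(size=(T, 2))  # e.g. happiness and anger indices
y = np.zeros(T)                     # e.g. industrial production growth
for t in range(2, T):
    y[t] = (0.5 * y[t - 1] - 0.2 * y[t - 2]
            + 0.3 * emotions[t, 0] + rng.normal(scale=0.1))

# Design matrix: two lags of y plus contemporaneous emotion scores.
X = np.column_stack([y[1:-1], y[:-2], emotions[2:]])
coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(np.round(coef, 2))  # should roughly recover [0.5, -0.2, 0.3, 0.0]
```

Because the second emotion series plays no role in generating `y`, its fitted coefficient lands near zero, which mirrors the empirical question the paper asks: which emotions carry genuine predictive power.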
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.