Are Large Language Models (LLMs) Good Social Predictors?
- URL: http://arxiv.org/abs/2402.12620v1
- Date: Tue, 20 Feb 2024 00:59:22 GMT
- Title: Are Large Language Models (LLMs) Good Social Predictors?
- Authors: Kaiqi Yang, Hang Li, Hongzhi Wen, Tai-Quan Peng, Jiliang Tang, Hui Liu
- Abstract summary: We show that Large Language Models (LLMs) cannot work as expected on social prediction when given general input features without shortcuts.
We introduce a novel social prediction task, Soc-PRF Prediction, which utilizes general features as input and simulates real-world social study settings.
- Score: 36.68104332805214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prediction has served as a crucial scientific method in modern social
studies. With the recent advancement of Large Language Models (LLMs), efforts
have been made to leverage LLMs to predict human features in social life,
such as presidential voting. These works suggest that LLMs are capable of
generating human-like responses. However, we find that the promising
performance achieved by previous studies stems from input features that act as
shortcuts to the response. In fact, once these shortcuts are removed, the
performance drops dramatically. To further examine the ability of LLMs, we
introduce a novel social prediction task, Soc-PRF Prediction, which uses
general features as input and simulates real-world social study settings. Through
comprehensive investigations of various LLMs, we reveal that LLMs cannot
work as expected on social prediction when given general input features without
shortcuts. We further investigate possible reasons for this phenomenon, which
suggest potential ways to enhance LLMs for social prediction.
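To make the abstract's core contrast concrete, the sketch below compares the same prediction query with and without shortcut features in the prompt. It is a minimal illustration under assumed feature names, toy data, and a stubbed query_llm() call; it is not the paper's actual Soc-PRF benchmark, prompts, or data.

```python
# Hypothetical sketch of the shortcut-vs-general-feature contrast.
# Feature names, the query_llm() stub, and the toy data are illustrative
# assumptions, not the paper's Soc-PRF benchmark.

from sklearn.metrics import accuracy_score

GENERAL_FEATURES = {"age", "education", "region"}   # assumed general attributes
SHORTCUT_FEATURES = {"party_registration"}          # assumed shortcut attribute

def build_prompt(profile: dict, include_shortcuts: bool) -> str:
    """Describe a respondent and ask for a yes/no prediction of a target response."""
    lines = [f"{k}: {v}" for k, v in profile.items() if k in GENERAL_FEATURES]
    if include_shortcuts:
        # Shortcut features are other responses highly correlated with the target.
        lines += [f"{k}: {v}" for k, v in profile.items() if k in SHORTCUT_FEATURES]
    lines.append("Question: will this respondent vote? Answer yes or no.")
    return "\n".join(lines)

def query_llm(prompt: str) -> str:
    """Stand-in for an LLM call; swap in a real chat-completion client here."""
    return "yes"  # placeholder so the sketch runs end to end

def evaluate(profiles, labels, include_shortcuts: bool) -> float:
    preds = [query_llm(build_prompt(p, include_shortcuts)).strip().lower()
             for p in profiles]
    return accuracy_score(labels, preds)

if __name__ == "__main__":
    toy_profiles = [
        {"age": 34, "education": "college", "region": "midwest",
         "party_registration": "registered"},
        {"age": 52, "education": "high school", "region": "south",
         "party_registration": "not registered"},
    ]
    toy_labels = ["yes", "no"]
    print("accuracy with shortcut features:", evaluate(toy_profiles, toy_labels, True))
    print("accuracy with general features only:", evaluate(toy_profiles, toy_labels, False))
```

In such a setup, a large gap between the two accuracies would indicate that the apparent predictive ability comes from the shortcut features rather than from the general profile.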
Related papers
- Causality for Large Language Models [37.10970529459278]
Large language models (LLMs) with billions or trillions of parameters are trained on vast datasets, achieving unprecedented success across a series of language tasks.
Recent research highlights that LLMs function as causal parrots, capable of reciting causal knowledge without truly understanding or applying it.
This survey aims to explore how causality can enhance LLMs at every stage of their lifecycle.
arXiv Detail & Related papers (2024-10-20T07:22:23Z) - LLMs are Not Just Next Token Predictors [0.0]
LLMs are statistical models of language, learned through gradient descent with a next-token prediction objective.
While LLMs are engineered using next-token prediction and trained based on their success at this task, our view is that reducing them to mere next-token predictors sells LLMs short.
To draw this out, we will make an analogy with a once-prominent research program in biology that explains evolution and development from the gene's-eye view.
arXiv Detail & Related papers (2024-08-06T16:36:28Z) - Exploring Prosocial Irrationality for LLM Agents: A Social Cognition View [21.341128731357415]
Large language models (LLMs) have been shown to face hallucination issues because the data they are trained on often contains human bias.
We propose CogMir, an open-ended Multi-LLM Agents framework that utilizes hallucination properties to assess and enhance LLM Agents' social intelligence.
arXiv Detail & Related papers (2024-05-23T16:13:33Z) - Understanding Privacy Risks of Embeddings Induced by Large Language Models [75.96257812857554]
Large language models show early signs of artificial general intelligence but struggle with hallucinations.
One promising solution is to store external knowledge as embeddings, aiding LLMs in retrieval-augmented generation.
Recent studies experimentally showed that the original text can be partially reconstructed from text embeddings by pre-trained language models.
arXiv Detail & Related papers (2024-04-25T13:10:48Z) - Large Language Models: A Survey [69.72787936480394]
Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks.
LLMs acquire their general-purpose language understanding and generation abilities by training billions of model parameters on massive amounts of text data.
arXiv Detail & Related papers (2024-02-09T05:37:09Z) - Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models [54.21695754082441]
We propose a framework to teach Large Language Models (LLMs) to generate explainable stock predictions.
A reflective agent learns how to explain past stock movements through self-reasoning, while the PPO trainer trains the model to generate the most likely explanations.
Our framework can outperform both traditional deep-learning and LLM methods in prediction accuracy and Matthews correlation coefficient.
arXiv Detail & Related papers (2024-02-06T03:18:58Z) - Rethinking Interpretability in the Era of Large Language Models [76.1947554386879]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide array of tasks.
The capability to explain in natural language allows LLMs to expand the scale and complexity of patterns that can be explained to a human.
These new capabilities raise new challenges, such as hallucinated explanations and immense computational costs.
arXiv Detail & Related papers (2024-01-30T17:38:54Z) - Psychometric Predictive Power of Large Language Models [32.31556074470733]
We find that instruction tuning does not always make large language models human-like from a cognitive modeling perspective.
Next-word probabilities estimated by instruction-tuned LLMs are often worse at simulating human reading behavior than those estimated by base LLMs.
arXiv Detail & Related papers (2023-11-13T17:19:14Z) - Do LLMs exhibit human-like response biases? A case study in survey design [66.1850490474361]
We investigate the extent to which large language models (LLMs) reflect human response biases, if at all.
We design a dataset and framework to evaluate whether LLMs exhibit human-like response biases in survey questionnaires.
Our comprehensive evaluation of nine models shows that popular open and commercial LLMs generally fail to reflect human-like behavior.
arXiv Detail & Related papers (2023-11-07T15:40:43Z) - Potential Benefits of Employing Large Language Models in Research in Moral Education and Development [0.0]
Recently, computer scientists have developed large language models (LLMs) by training prediction models with large-scale language corpora and human reinforcements.
I will examine how LLMs might contribute to moral education and development research.
arXiv Detail & Related papers (2023-06-23T22:39:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.