Can Large Language Models Predict Associations Among Human Attitudes?
- URL: http://arxiv.org/abs/2503.21011v1
- Date: Wed, 26 Mar 2025 21:58:43 GMT
- Title: Can Large Language Models Predict Associations Among Human Attitudes?
- Authors: Ana Ma, Derek Powell
- Abstract summary: We show that large language models (LLMs) can predict human attitudes based on other attitudes. Using a novel dataset of human responses toward diverse attitude statements, we found that a frontier language model (GPT-4o) was able to recreate the pairwise correlations among individual attitudes. In an advance over prior work, we tested GPT-4o's ability to predict in the absence of surface similarity between attitudes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prior work has shown that large language models (LLMs) can predict human attitudes based on other attitudes, but this work has largely focused on predictions from highly similar and interrelated attitudes. In contrast, human attitudes are often strongly associated even across disparate and dissimilar topics. Using a novel dataset of human responses toward diverse attitude statements, we found that a frontier language model (GPT-4o) was able to recreate the pairwise correlations among individual attitudes and to predict individuals' attitudes from one another. Crucially, in an advance over prior work, we tested GPT-4o's ability to predict in the absence of surface similarity between attitudes, finding that while surface similarity improves prediction accuracy, the model was still highly capable of generating meaningful social inferences between dissimilar attitudes. Altogether, our findings indicate that LLMs capture crucial aspects of the deeper, latent structure of human belief systems.
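Below is a minimal, hypothetical sketch of the kind of correlation-structure comparison the abstract describes: given a human response matrix and an LLM-predicted response matrix (random placeholders here, not the authors' data or pipeline), compute the pairwise correlations among attitude items for each and correlate the two matrices' off-diagonal entries.

```python
# Minimal sketch (placeholder data, not the authors' pipeline): compare
# the pairwise attitude-correlation matrix from human responses with
# one reconstructed from LLM-predicted responses.
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_items = 200, 30

# Placeholder matrices standing in for real survey ratings and for
# GPT-4o's predicted ratings of the same participants on the same items.
human = rng.normal(size=(n_participants, n_items))
llm_predicted = human + rng.normal(scale=0.8, size=(n_participants, n_items))

# Pairwise correlations among attitude items (an items x items matrix).
human_corr = np.corrcoef(human, rowvar=False)
llm_corr = np.corrcoef(llm_predicted, rowvar=False)

# How well does the predicted structure recover the human one?
# Correlate the upper-triangle (off-diagonal) entries of the two matrices.
iu = np.triu_indices(n_items, k=1)
recovery_r = np.corrcoef(human_corr[iu], llm_corr[iu])[0, 1]
print(f"correlation-structure recovery: r = {recovery_r:.2f}")
```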
Related papers
- Multi-turn Evaluation of Anthropomorphic Behaviours in Large Language Models [26.333097337393685]
The tendency of users to anthropomorphise large language models (LLMs) is of growing interest to AI developers, researchers, and policy-makers. Here, we present a novel method for empirically evaluating anthropomorphic LLM behaviours in realistic and varied settings. First, we develop a multi-turn evaluation of 14 anthropomorphic behaviours. Second, we present a scalable, automated approach by employing simulations of user interactions. Third, we conduct an interactive, large-scale human subject study (N=1101) to validate that the model behaviours we measure predict real users' anthropomorphic perceptions.
arXiv Detail & Related papers (2025-02-10T22:09:57Z)
- Judgment of Learning: A Human Ability Beyond Generative Artificial Intelligence [0.0]
Large language models (LLMs) increasingly mimic human cognition in various language-based tasks. We introduce a cross-agent prediction model to assess whether ChatGPT-based LLMs align with human judgments of learning (JOL). Our results revealed that while human JOL reliably predicted actual memory performance, none of the tested LLMs demonstrated comparable predictive accuracy.
arXiv Detail & Related papers (2024-10-17T09:42:30Z)
- The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Pre-trained Language Models [78.69526166193236]
Pre-trained Language Models (PLMs) have been acknowledged to contain harmful information, such as social biases.
We propose Social Bias Neurons to accurately pinpoint units (i.e., neurons) in a language model that can be attributed to undesirable behavior, such as social bias.
As measured by prior metrics from StereoSet, our model achieves a higher degree of fairness while maintaining language modeling ability at low cost.
arXiv Detail & Related papers (2024-06-14T15:41:06Z)
- Large Language Models Can Infer Personality from Free-Form User Interactions [0.0]
GPT-4 can infer personality with moderate accuracy, outperforming previous approaches.
Results show that the direct focus on personality assessment did not result in a less positive user experience.
Preliminary analyses suggest that the accuracy of personality inferences varies only marginally across different socio-demographic subgroups.
arXiv Detail & Related papers (2024-05-19T20:33:36Z)
- Large Language Models for Psycholinguistic Plausibility Pretesting [47.1250032409564]
We investigate whether Language Models (LMs) can be used to generate plausibility judgements.
We find that GPT-4 plausibility judgements highly correlate with human judgements across the structures we examine.
We then test whether this correlation implies that LMs can be used instead of humans for pretesting.
arXiv Detail & Related papers (2024-02-08T07:20:02Z)
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- Humans and language models diverge when predicting repeating text [52.03471802608112]
We present a scenario in which the performance of humans and LMs diverges.
Human and GPT-2 LM predictions are strongly aligned in the first presentation of a text span, but their performance quickly diverges when memory begins to play a role.
We hope that this scenario will spur future work in bringing LMs closer to human behavior.
arXiv Detail & Related papers (2023-10-10T08:24:28Z)
- Large Language Models Can Infer Psychological Dispositions of Social Media Users [1.0923877073891446]
We test whether GPT-3.5 and GPT-4 can derive the Big Five personality traits from users' Facebook status updates in a zero-shot learning scenario.
Our results show an average correlation of r = .29 (range = [.22, .33]) between LLM-inferred and self-reported trait scores (a toy sketch of this trait-level comparison appears after this list).
Predictions were found to be more accurate for women and younger individuals on several traits, suggesting a potential bias stemming from the underlying training data or differences in online self-expression.
arXiv Detail & Related papers (2023-09-13T01:27:48Z)
- Comparing Psychometric and Behavioral Predictors of Compliance During Human-AI Interactions [5.893351309010412]
A common hypothesis in adaptive AI research is that minor differences in people's predisposition to trust can significantly impact their likelihood of complying with recommendations from the AI.
We benchmark a popular measure of this kind against behavioral predictors of compliance and find the behavioral predictors more informative.
This suggests a general property: individual differences in initial behavior are more predictive than differences in self-reported trust attitudes.
arXiv Detail & Related papers (2023-02-03T16:56:25Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are speaking activity, support vector machines, meetings composed of 3-4 persons, and microphones and cameras, respectively.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Expertise and confidence explain how social influence evolves along intellective tasks [10.525352489242396]
We study interpersonal influence in small groups of individuals who collectively execute a sequence of intellective tasks.
We report empirical evidence on theories of transactive memory systems, social comparison, and confidence as origins of social influence.
We propose a cognitive dynamical model inspired by these theories to describe the process by which individuals adjust interpersonal influences over time.
arXiv Detail & Related papers (2020-11-13T23:48:25Z)
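As referenced in the Big Five personality entry above, here is a toy sketch of that trait-level comparison: per-trait Pearson correlations between self-reported and LLM-inferred scores, averaged across traits as in the reported r = .29. The data are synthetic placeholders; only the trait names and the analysis pattern come from the abstract.

```python
# Toy sketch (synthetic data, not the paper's pipeline): per-trait
# Pearson correlations between self-reported and LLM-inferred Big Five
# scores, plus their mean across traits.
import numpy as np

rng = np.random.default_rng(1)
traits = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]
n_users = 500

# Placeholder scores; a real analysis would pair questionnaire scores
# with trait ratings inferred by the LLM from users' status updates.
self_reported = rng.normal(size=(n_users, len(traits)))
llm_inferred = 0.3 * self_reported + rng.normal(size=(n_users, len(traits)))

per_trait_r = [np.corrcoef(self_reported[:, i], llm_inferred[:, i])[0, 1]
               for i in range(len(traits))]
for trait, r in zip(traits, per_trait_r):
    print(f"{trait:>17}: r = {r:.2f}")
print(f"mean across traits: r = {np.mean(per_trait_r):.2f}")
```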