Large Language Models as Subpopulation Representative Models: A Review
- URL: http://arxiv.org/abs/2310.17888v1
- Date: Fri, 27 Oct 2023 04:31:27 GMT
- Title: Large Language Models as Subpopulation Representative Models: A Review
- Authors: Gabriel Simmons and Christopher Hare
- Abstract summary: Large language models (LLMs) could be used to estimate subpopulation representative models (SRMs).
SRMs could provide an alternate or complementary way to measure public opinion among demographic, geographic, or political segments of the population.
- Score: 5.439020425819001
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Of the many commercial and scientific opportunities provided by large
language models (LLMs; including OpenAI's ChatGPT, Meta's LLaMA, and
Anthropic's Claude), one of the more intriguing applications has been the
simulation of human behavior and opinion. LLMs have been used to generate human
simulacra to serve as experimental participants, survey respondents, or other
independent agents, with outcomes that often closely parallel the observed
behavior of their genuine human counterparts. Here, we specifically consider
the feasibility of using LLMs to estimate subpopulation representative models
(SRMs). SRMs could provide an alternate or complementary way to measure public
opinion among demographic, geographic, or political segments of the population.
However, the introduction of new technology to the socio-technical
infrastructure does not come without risk. We provide an overview of behavior
elicitation techniques for LLMs, and a survey of existing SRM implementations.
We offer frameworks for the analysis, development, and practical implementation
of LLMs as SRMs, consider potential risks, and suggest directions for future
work.
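As a concrete illustration of the persona-conditioning style of behavior elicitation surveyed in the paper, the sketch below conditions an LLM on demographic attributes and tallies sampled answers to a survey item. The prompt template, attribute fields, and the `query_llm` stub are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of persona-conditioned behavior elicitation for an SRM.
# The prompt wording, attribute fields, and query_llm stub are assumptions
# for illustration, not an implementation from the paper.

from collections import Counter

def build_persona_prompt(attributes: dict, question: str) -> str:
    """Condition the model on demographic attributes before asking a survey item."""
    persona = ", ".join(f"{k}: {v}" for k, v in attributes.items())
    return (
        f"You are answering a survey as a person with these characteristics: "
        f"{persona}.\nQuestion: {question}\nAnswer with one of: agree, disagree."
    )

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any LLM API; should return 'agree' or 'disagree'."""
    raise NotImplementedError("plug in an actual model call here")

def estimate_subpopulation_opinion(attributes: dict, question: str, n: int = 100) -> Counter:
    """Sample n responses and tally them as a crude subpopulation opinion estimate."""
    return Counter(query_llm(build_persona_prompt(attributes, question)) for _ in range(n))

# Example usage (requires a real query_llm implementation):
# estimate_subpopulation_opinion(
#     {"age": "18-29", "region": "Midwest", "party": "independent"},
#     "The federal government should invest more in public transit.",
# )
```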
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- HERM: Benchmarking and Enhancing Multimodal LLMs for Human-Centric Understanding [68.4046326104724]
We introduce HERM-Bench, a benchmark for evaluating the human-centric understanding capabilities of MLLMs.
Our work reveals the limitations of existing MLLMs in understanding complex human-centric scenarios.
We present HERM-100K, a comprehensive dataset with multi-level human-centric annotations, aimed at enhancing MLLMs' training.
arXiv Detail & Related papers (2024-10-09T11:14:07Z)
- Agentic Society: Merging skeleton from real world and texture from Large Language Model [4.740886789811429]
This paper explores a novel framework that leverages census data and large language models to generate virtual populations.
We show that our method produces personas with the variability essential for simulating diverse human behaviors in social science experiments.
However, the evaluation shows only weak signs of statistical truthfulness, reflecting the limited capability of current LLMs.
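A minimal sketch of the skeleton-plus-texture idea described above: sample virtual residents from census marginals, then ask an LLM to add narrative "texture". The attribute names, weights, and prompt wording are hypothetical assumptions; the real framework would fit joint distributions from actual census tables.

```python
# Hedged sketch: census "skeleton" sampled from marginals, LLM adds "texture".
# Attribute names, weights, and the prompt are illustrative assumptions.

import random

CENSUS_MARGINALS = {
    "age_bracket": (["18-29", "30-44", "45-64", "65+"], [0.21, 0.25, 0.33, 0.21]),
    "education":   (["high school", "some college", "bachelor's+"], [0.28, 0.36, 0.36]),
}

def sample_persona(rng: random.Random) -> dict:
    """Draw one virtual resident; independent marginals here, joint tables in practice."""
    return {attr: rng.choices(values, weights)[0]
            for attr, (values, weights) in CENSUS_MARGINALS.items()}

def persona_texture_prompt(skeleton: dict) -> str:
    """Prompt an LLM to write narrative texture on top of the census skeleton."""
    facts = "; ".join(f"{k} = {v}" for k, v in skeleton.items())
    return f"Write a one-paragraph persona for a person where {facts}."

rng = random.Random(0)
for persona in (sample_persona(rng) for _ in range(5)):
    print(persona_texture_prompt(persona))
```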
arXiv Detail & Related papers (2024-09-02T08:28:19Z)
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Large Language Models as Instruments of Power: New Regimes of Autonomous Manipulation and Control [0.0]
Large language models (LLMs) can reproduce a wide variety of rhetorical styles and generate text that expresses a broad spectrum of sentiments.
We consider a set of underestimated societal harms made possible by the rapid and largely unregulated adoption of LLMs.
arXiv Detail & Related papers (2024-05-06T19:52:57Z)
- Explaining Large Language Models Decisions Using Shapley Values [1.223779595809275]
Large language models (LLMs) have opened up exciting possibilities for simulating human behavior and cognitive processes.
However, the validity of utilizing LLMs as stand-ins for human subjects remains uncertain.
This paper presents a novel approach based on Shapley values to interpret LLM behavior and quantify the relative contribution of each prompt component to the model's output.
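To make the Shapley attribution concrete, here is a hedged sketch that computes exact Shapley values over a small set of prompt components. The scalar `score` function is a toy stand-in for an LLM output statistic (e.g., the probability of a particular answer), and component ordering effects are ignored for simplicity.

```python
# Exact Shapley values over prompt components, assuming a scalar score(prompt).
# The score function and component list are hypothetical stand-ins; ordering
# effects within the prompt are ignored in this simplified sketch.

from itertools import combinations
from math import factorial

def shapley_values(components: list[str], score) -> dict[str, float]:
    """Exact Shapley value of each prompt component under a set-function score."""
    n = len(components)
    phi = {c: 0.0 for c in components}
    for c in components:
        rest = [x for x in components if x != c]
        for k in range(n):
            for subset in combinations(rest, k):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_c = score(" ".join(list(subset) + [c]))
                without_c = score(" ".join(subset))
                phi[c] += weight * (with_c - without_c)
    return phi

# Toy score: counts loaded words; in practice this would be an LLM output statistic.
loaded = {"urgent", "crisis"}
score = lambda prompt: sum(w in loaded for w in prompt.split())
print(shapley_values(["urgent", "please", "crisis"], score))
# -> {'urgent': 1.0, 'please': 0.0, 'crisis': 1.0}
```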
arXiv Detail & Related papers (2024-03-29T22:49:43Z)
- LLM-driven Imitation of Subrational Behavior: Illusion or Reality? [3.2365468114603937]
Existing work highlights the ability of Large Language Models to address complex reasoning tasks and mimic human communication.
We propose to investigate the use of LLMs to generate synthetic human demonstrations, which are then used to learn subrational agent policies.
We experimentally evaluate the ability of our framework to model sub-rationality through four simple scenarios.
arXiv Detail & Related papers (2024-02-13T19:46:39Z)
- Systematic Biases in LLM Simulations of Debates [12.933509143906141]
We study the limitations of Large Language Models in simulating human interactions.
Our findings indicate a tendency for LLM agents to conform to the model's inherent social biases.
These results underscore the need for further research to develop methods that help agents overcome these biases.
arXiv Detail & Related papers (2024-02-06T14:51:55Z)
- CoMPosT: Characterizing and Evaluating Caricature in LLM Simulations [61.9212914612875]
We present a framework to characterize LLM simulations using four dimensions: Context, Model, Persona, and Topic.
We use this framework to measure open-ended LLM simulations' susceptibility to caricature, defined via two criteria: individuation and exaggeration.
We find that for GPT-4, simulations of certain demographics (political and marginalized groups) and topics (general, uncontroversial) are highly susceptible to caricature.
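As a rough illustration of the exaggeration criterion (not CoMPosT's actual metric), the sketch below compares how often persona-associated terms appear in persona-conditioned generations versus a default-persona baseline; the term set and example texts are hypothetical.

```python
# Toy exaggeration check: are persona-associated terms over-represented in
# persona-conditioned text relative to a default baseline? This is an
# illustrative proxy, not CoMPosT's published metric.

def term_rate(texts: list[str], terms: set[str]) -> float:
    """Fraction of tokens that belong to the persona-associated term set."""
    tokens = [t.lower() for text in texts for t in text.split()]
    return sum(t in terms for t in tokens) / max(len(tokens), 1)

def exaggeration_score(persona_texts, baseline_texts, persona_terms) -> float:
    """>1 means persona-associated terms are over-represented vs. baseline."""
    base = term_rate(baseline_texts, persona_terms)
    return term_rate(persona_texts, persona_terms) / max(base, 1e-9)

# Hypothetical example inputs:
persona_out = ["as a lifelong activist I always protest", "protest and organize"]
baseline_out = ["they organize a bake sale", "politics is complicated"]
print(exaggeration_score(persona_out, baseline_out, {"protest", "activist", "organize"}))
```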
arXiv Detail & Related papers (2023-10-17T18:00:25Z)
- Survey of Social Bias in Vision-Language Models [65.44579542312489]
This survey aims to provide researchers with a high-level insight into the similarities and differences of social bias studies in pre-trained models across NLP, CV, and VL.
The findings and recommendations presented here can benefit the ML community, fostering the development of fairer and less biased AI models.
arXiv Detail & Related papers (2023-09-24T15:34:56Z)
- How Can Recommender Systems Benefit from Large Language Models: A Survey [82.06729592294322]
Large language models (LLMs) have shown impressive general intelligence and human-like capabilities.
We conduct a comprehensive survey on this research direction from the perspective of the whole pipeline in real-world recommender systems.
arXiv Detail & Related papers (2023-06-09T11:31:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.