ChatGPT vs Social Surveys: Probing Objective and Subjective Silicon Population
- URL: http://arxiv.org/abs/2409.02601v3
- Date: Thu, 06 Mar 2025 03:37:06 GMT
- Title: ChatGPT vs Social Surveys: Probing Objective and Subjective Silicon Population
- Authors: Muzhi Zhou, Lu Yu, Xiaomin Geng, Lan Luo
- Abstract summary: Large Language Models (LLMs) have the potential to simulate human responses in social surveys and generate reliable predictions. We employ repeated random sampling to create sampling distributions that identify the population parameters of silicon samples generated by GPT. Our findings show that GPT's demographic distribution aligns with the 2020 U.S. population in terms of gender and average age. GPT's point estimates for attitudinal scores are highly inconsistent and show no clear inclination toward any particular ideology.
- Score: 7.281887764378982
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent discussions about Large Language Models (LLMs) indicate that they have the potential to simulate human responses in social surveys and generate reliable predictions, such as those found in political polls. However, the existing findings are highly inconsistent, leaving us uncertain about the population characteristics of data generated by LLMs. In this paper, we employ repeated random sampling to create sampling distributions that identify the population parameters of silicon samples generated by GPT. Our findings show that GPT's demographic distribution aligns with the 2020 U.S. population in terms of gender and average age. However, GPT significantly overestimates the representation of the Black population and individuals with higher levels of education, even when it possesses accurate knowledge. Furthermore, GPT's point estimates for attitudinal scores are highly inconsistent and show no clear inclination toward any particular ideology. The sample response distributions exhibit a normal pattern that diverges significantly from those of human respondents. Consistent with previous studies, we find that GPT's answers are more deterministic than those of humans. We conclude by discussing the concerning implications of this biased and deterministic silicon population for making inferences about real-world populations.
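To make the sampling logic above concrete, here is a minimal sketch of drawing repeated "silicon samples" from a GPT model and building a sampling distribution for a single statistic (mean age). The prompt wording, model name, and response parsing are illustrative assumptions, not the authors' exact protocol.

```python
# Minimal sketch of the repeated-random-sampling idea: draw many independent
# "silicon samples" from an LLM, compute a statistic per sample, and inspect
# the resulting sampling distribution. Prompt, model name, and parsing logic
# are illustrative assumptions, not the paper's exact protocol.
import re
import statistics
from openai import OpenAI  # assumes the `openai` Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ("You are a randomly selected adult living in the United States in 2020. "
          "State only your age as a number.")

def draw_silicon_respondent(model: str = "gpt-4o-mini") -> int | None:
    """Query the model once for a simulated respondent's age."""
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # sampling variability is the whole point here
    )
    match = re.search(r"\d+", reply.choices[0].message.content)
    return int(match.group()) if match else None

def sample_mean_age(n: int = 100) -> float:
    """One silicon sample of size n -> one point estimate of mean age."""
    ages = [a for a in (draw_silicon_respondent() for _ in range(n)) if a is not None]
    return statistics.mean(ages)

# Repeat the sampling to build a sampling distribution of the estimator.
estimates = [sample_mean_age(n=100) for _ in range(50)]
print("point estimate of mean age:", statistics.mean(estimates))
print("standard error (spread of the sampling distribution):", statistics.stdev(estimates))
```

The spread of these repeated point estimates is what lets the paper treat GPT output as a "silicon population" with identifiable parameters, rather than relying on a single generated sample.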
Related papers
- ChatGPT is not A Man but Das Man: Representativeness and Structural Consistency of Silicon Samples Generated by Large Language Models [4.066868402300836]
Large language models (LLMs) are proposed as "silicon samples" for simulating human opinions. This study examines this notion, arguing that LLMs may misrepresent population-level opinions.
arXiv Detail & Related papers (2025-06-25T12:35:44Z) - Language Model Fine-Tuning on Scaled Survey Data for Predicting Distributions of Public Opinions [4.020002996724124]
Large language models (LLMs) predict survey responses in advance during the early stages of survey design.
We propose directly fine-tuning LLMs to predict response distributions by leveraging unique structural characteristics of survey data.
We show that fine-tuning on SubPOP greatly improves the match between LLM predictions and human responses across various subpopulations.
arXiv Detail & Related papers (2025-02-24T00:31:33Z) - Human Preferences in Large Language Model Latent Space: A Technical Analysis on the Reliability of Synthetic Data in Voting Outcome Prediction [5.774786149181393]
We analyze how demographic attributes and prompt variations influence latent opinion mappings in large language models (LLMs).
We find that LLM-generated data fails to replicate the variance observed in real-world human responses.
In the political space, persona-to-party mappings exhibit limited differentiation, resulting in synthetic data that lacks the nuanced distribution of opinions found in survey data.
arXiv Detail & Related papers (2025-02-22T16:25:33Z) - Specializing Large Language Models to Simulate Survey Response Distributions for Global Populations [49.908708778200115]
We are the first to specialize large language models (LLMs) for simulating survey response distributions.
As a testbed, we use country-level results from two global cultural surveys.
We devise a fine-tuning method based on first-token probabilities to minimize divergence between predicted and actual response distributions.
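As a rough illustration of the first-token idea mentioned above, the sketch below reads a response distribution off a causal LM's next-token probabilities for the answer labels and scores it against a human distribution with KL divergence, the kind of quantity such a fine-tuning objective would minimize. The model, question, and single-token answer labels are placeholder assumptions, not the paper's setup.

```python
# Sketch: read a survey-response distribution from first-token probabilities
# of a causal LM and score it against the observed human distribution with
# KL divergence. Model name and single-token answer labels are illustrative.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM with a tokenizer works
tok = AutoTokenizer.from_pretrained(model_name)
lm = AutoModelForCausalLM.from_pretrained(model_name)
lm.eval()

prompt = (
    "Question: How satisfied are you with your life overall?\n"
    "Options: 1) Very satisfied 2) Somewhat 3) Not very 4) Not at all\n"
    "Answer: "
)
option_labels = ["1", "2", "3", "4"]
option_ids = [tok.encode(lab, add_special_tokens=False)[0] for lab in option_labels]

with torch.no_grad():
    logits = lm(**tok(prompt, return_tensors="pt")).logits[0, -1]  # next-token logits

# Predicted distribution over the answer options from the first token only.
pred = F.softmax(logits[option_ids], dim=-1)

# Observed human distribution for the same item (made-up numbers for illustration).
human = torch.tensor([0.40, 0.35, 0.18, 0.07])

kl = F.kl_div(pred.log(), human, reduction="sum")  # KL(human || predicted)
print("predicted:", pred.tolist(), "KL divergence:", kl.item())
```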
arXiv Detail & Related papers (2025-02-10T21:59:27Z) - Vox Populi, Vox AI? Using Language Models to Estimate German Public Opinion [45.84205238554709]
We generate a synthetic sample of personas matching the individual characteristics of the 2017 German Longitudinal Election Study respondents.
We ask the LLM GPT-3.5 to predict each respondent's vote choice and compare these predictions to the survey-based estimates.
We find that GPT-3.5 does not predict citizens' vote choice accurately, exhibiting a bias towards the Green and Left parties.
arXiv Detail & Related papers (2024-07-11T14:52:18Z) - Evaluating LLMs for Gender Disparities in Notable Persons [0.40964539027092906]
This study examines the use of Large Language Models (LLMs) for retrieving factual information.
It addresses concerns over their propensity to produce factually incorrect "hallucinated" responses or to decline to answer the prompt altogether.
arXiv Detail & Related papers (2024-03-14T07:58:27Z) - Random Silicon Sampling: Simulating Human Sub-Population Opinion Using a Large Language Model Based on Group-Level Demographic Information [15.435605802794408]
Large language models exhibit societal biases associated with demographic information.
We propose "random silicon sampling," a method to emulate the opinions of the human population sub-group.
We find that language models can generate response distributions remarkably similar to the actual U.S. public opinion polls.
arXiv Detail & Related papers (2024-02-28T08:09:14Z) - Aligning with Whom? Large Language Models Have Gender and Racial Biases in Subjective NLP Tasks [15.015148115215315]
We conduct experiments on four popular large language models (LLMs) to investigate their capability to understand group differences and potential biases in their predictions for politeness and offensiveness.
We find that for both tasks, model predictions are closer to the labels from White and female participants.
More specifically, when being prompted to respond from the perspective of "Black" and "Asian" individuals, models show lower performance in predicting both overall scores as well as the scores from corresponding groups.
arXiv Detail & Related papers (2023-11-16T10:02:24Z) - On the steerability of large language models toward data-driven personas [98.9138902560793]
Large language models (LLMs) are known to generate biased responses where the opinions of certain groups and populations are underrepresented.
Here, we present a novel approach to achieve controllable generation of specific viewpoints using LLMs.
arXiv Detail & Related papers (2023-11-08T19:01:13Z) - Do LLMs exhibit human-like response biases? A case study in survey design [66.1850490474361]
We investigate the extent to which large language models (LLMs) reflect human response biases, if at all.
We design a dataset and framework to evaluate whether LLMs exhibit human-like response biases in survey questionnaires.
Our comprehensive evaluation of nine models shows that popular open and commercial LLMs generally fail to reflect human-like behavior.
arXiv Detail & Related papers (2023-11-07T15:40:43Z) - Questioning the Survey Responses of Large Language Models [18.61486375469644]
We critically examine language models' survey responses on the basis of the well-established American Community Survey by the U.S. Census Bureau.
We find that models' responses are governed by ordering and labeling biases, leading to variations across models that do not persist after adjusting for systematic biases.
Our findings suggest caution in treating models' survey responses as equivalent to those of human populations.
arXiv Detail & Related papers (2023-06-13T17:48:27Z) - Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
arXiv Detail & Related papers (2023-03-20T19:32:49Z) - Mapping Urban Population Growth from Sentinel-2 MSI and Census Data Using Deep Learning: A Case Study in Kigali, Rwanda [0.19116784879310023]
We evaluate how deep learning change detection techniques can unravel temporal population dynamics over short intervals.
A ResNet encoder, pretrained on a population mapping task with Sentinel-2 MSI data, was incorporated into a Siamese network.
The network was trained at the census level to accurately predict population change.
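The shared-encoder (Siamese) pattern described above can be sketched in PyTorch as follows. The backbone, number of input bands, and regression head are stand-ins: the paper's encoder is pretrained on a population-mapping task with Sentinel-2 MSI data, not initialized from scratch as here.

```python
# Sketch of the shared-encoder (Siamese) pattern: one encoder applied to both
# acquisition dates, with a small head regressing population change from the
# pair of embeddings. Backbone, input bands, and head are illustrative only.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SiamesePopulationChange(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)  # in practice: weights from a population-mapping pretext task
        backbone.fc = nn.Identity()        # keep the 512-d feature vector
        self.encoder = backbone            # shared weights -> Siamese
        self.head = nn.Sequential(
            nn.Linear(2 * 512, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, img_t0: torch.Tensor, img_t1: torch.Tensor) -> torch.Tensor:
        z0 = self.encoder(img_t0)          # same encoder for both dates
        z1 = self.encoder(img_t1)
        return self.head(torch.cat([z0, z1], dim=1))  # predicted population change

model = SiamesePopulationChange()
t0 = torch.randn(2, 3, 224, 224)  # stand-in for image patches at date 0
t1 = torch.randn(2, 3, 224, 224)  # and at date 1
print(model(t0, t1).shape)        # torch.Size([2, 1])
```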
arXiv Detail & Related papers (2023-03-15T10:39:31Z) - Can ChatGPT Assess Human Personalities? A General Evaluation Framework [70.90142717649785]
Large Language Models (LLMs) have produced impressive results in various areas, but their potential human-like psychology is still largely unexplored.
This paper presents a generic evaluation framework for LLMs to assess human personalities based on Myers Briggs Type Indicator (MBTI) tests.
arXiv Detail & Related papers (2023-03-01T06:16:14Z) - Social Biases in Automatic Evaluation Metrics for NLG [53.76118154594404]
We propose an evaluation method based on Word Embeddings Association Test (WEAT) and Sentence Embeddings Association Test (SEAT) to quantify social biases in evaluation metrics.
We construct gender-swapped meta-evaluation datasets to explore the potential impact of gender bias in image caption and text summarization tasks.
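For reference, here is a minimal sketch of the WEAT differential-association statistic that this family of tests builds on. The embedding function and word lists are random placeholders; an actual test would use the embeddings underlying the evaluation metric being audited.

```python
# Minimal sketch of the WEAT differential-association statistic. `embed` and
# the word lists are placeholders standing in for real embeddings.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w: np.ndarray, A: list, B: list) -> float:
    """s(w, A, B): mean cosine similarity to A minus mean similarity to B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat(X: list, Y: list, A: list, B: list) -> float:
    """WEAT test statistic: summed associations of X minus those of Y."""
    return sum(assoc(x, A, B) for x in X) - sum(assoc(y, A, B) for y in Y)

# Toy usage with random vectors standing in for target/attribute embeddings.
rng = np.random.default_rng(0)
embed = lambda _: rng.normal(size=50)
X = [embed(w) for w in ["engineer", "scientist"]]   # target set X
Y = [embed(w) for w in ["nurse", "teacher"]]        # target set Y
A = [embed(w) for w in ["he", "man"]]               # attribute set A
B = [embed(w) for w in ["she", "woman"]]            # attribute set B
print("WEAT statistic:", weat(X, Y, A, B))
```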
arXiv Detail & Related papers (2022-10-17T08:55:26Z) - Naturalistic Causal Probing for Morpho-Syntax [76.83735391276547]
We suggest a naturalistic strategy for input-level intervention on real world data in Spanish.
Using our approach, we isolate morpho-syntactic features from confounders in sentences.
We apply this methodology to analyze causal effects of gender and number on contextualized representations extracted from pre-trained models.
arXiv Detail & Related papers (2022-05-14T11:47:58Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z) - Magnify Your Population: Statistical Downscaling to Augment the Spatial Resolution of Socioeconomic Census Data [48.7576911714538]
We present a new statistical downscaling approach to derive fine-scale estimates of key socioeconomic attributes.
For each selected socioeconomic variable, a Random Forest model is trained on the source Census units and then used to generate fine-scale gridded predictions.
As a case study, we apply this method to Census data in the United States, downscaling the selected socioeconomic variables available at the block group level to a grid of 300 m spatial resolution.
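The train-on-coarse, predict-on-fine pattern described above can be sketched with scikit-learn as follows; the covariates and data are synthetic placeholders rather than the paper's feature set.

```python
# Sketch of the downscaling pattern: fit a Random Forest on coarse census
# units (covariates -> socioeconomic variable), then predict that variable on
# fine-grid cells from the same covariates. Data below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Coarse units (e.g., block groups): covariates such as built-up area,
# nighttime lights, road density, plus the known socioeconomic value.
n_units, n_features = 500, 3
X_coarse = rng.random((n_units, n_features))
y_coarse = 100 * X_coarse[:, 0] + 30 * X_coarse[:, 1] + rng.normal(0, 5, n_units)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_coarse, y_coarse)

# Fine grid cells: the same covariates observed at higher spatial resolution.
X_grid = rng.random((10_000, n_features))
y_grid_pred = model.predict(X_grid)
print("fine-scale estimates:", y_grid_pred[:5])
```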
arXiv Detail & Related papers (2020-06-23T16:52:18Z) - Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining the competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z) - Towards Controllable Biases in Language Generation [87.89632038677912]
We develop a method to induce societal biases in generated text when input prompts contain mentions of specific demographic groups.
We analyze two scenarios: 1) inducing negative biases for one demographic and positive biases for another demographic, and 2) equalizing biases between demographics.
arXiv Detail & Related papers (2020-05-01T08:25:11Z) - Survival Cluster Analysis [93.50540270973927]
There is an unmet need in survival analysis for identifying subpopulations with distinct risk profiles.
An approach that addresses this need is likely to improve characterization of individual outcomes.
arXiv Detail & Related papers (2020-02-29T22:41:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.