Arti-"fickle" Intelligence: Using LLMs as a Tool for Inference in the Political and Social Sciences
- URL: http://arxiv.org/abs/2504.03822v1
- Date: Fri, 04 Apr 2025 17:35:45 GMT
- Title: Arti-"fickle" Intelligence: Using LLMs as a Tool for Inference in the Political and Social Sciences
- Authors: Lisa P. Argyle, Ethan C. Busby, Joshua R. Gubler, Bryce Hepner, Alex Lyman, David Wingate
- Abstract summary: Generative large language models (LLMs) are incredibly useful, versatile, and promising tools. They will be of most use to political and social science researchers when they are used in a way that advances understanding about real human behaviors and concerns. We suggest that researchers in the political and social sciences need to remain focused on the scientific goal of inference.
- Score: 4.051777802443125
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Generative large language models (LLMs) are incredibly useful, versatile, and promising tools. However, they will be of most use to political and social science researchers when they are used in a way that advances understanding about real human behaviors and concerns. To promote the scientific use of LLMs, we suggest that researchers in the political and social sciences need to remain focused on the scientific goal of inference. To this end, we discuss the challenges and opportunities related to scientific inference with LLMs, using validation of model output as an illustrative case for discussion. We propose a set of guidelines related to establishing the failure and success of LLMs when completing particular tasks, and discuss how we can make inferences from these observations. We conclude with a discussion of how this refocus will improve the accumulation of shared scientific knowledge about these tools and their uses in the social sciences.
Related papers
- A Call for New Recipes to Enhance Spatial Reasoning in MLLMs [85.67171333213301]
Multimodal Large Language Models (MLLMs) have demonstrated impressive performance in general vision-language tasks.
Recent studies have exposed critical limitations in their spatial reasoning capabilities.
This deficiency in spatial reasoning significantly constrains MLLMs' ability to interact effectively with the physical world.
arXiv Detail & Related papers (2025-04-21T11:48:39Z)
- LLM Social Simulations Are a Promising Research Method [4.6456873975541635]
We argue that the promise of large language model (LLM) social simulations can be achieved by addressing five tractable challenges. We believe that LLM social simulations can already be used for exploratory research, such as pilot experiments for psychology, economics, sociology, and marketing.
arXiv Detail & Related papers (2025-04-03T03:01:26Z)
- Position: Multimodal Large Language Models Can Significantly Advance Scientific Reasoning [51.11965014462375]
Multimodal Large Language Models (MLLMs) integrate text, images, and other modalities. This paper argues that MLLMs can significantly advance scientific reasoning across disciplines such as mathematics, physics, chemistry, and biology.
arXiv Detail & Related papers (2025-02-05T04:05:27Z)
- Political-LLM: Large Language Models in Political Science [159.95299889946637]
Large language models (LLMs) have been widely adopted in political science tasks. Political-LLM aims to advance the comprehensive understanding of integrating LLMs into computational political science.
arXiv Detail & Related papers (2024-12-09T08:47:50Z) - Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - Intelligent Computing Social Modeling and Methodological Innovations in Political Science in the Era of Large Language Models [16.293574791587247]
This paper proposes the "Intelligent Computing Social Modeling" (ICSM) method to address these issues.
By simulating the U.S. presidential election, this study empirically demonstrates the operational pathways and methodological advantages of ICSM.
The findings suggest that LLMs will drive methodological innovation in political science through integration and improvement rather than direct substitution.
arXiv Detail & Related papers (2024-10-07T06:30:59Z) - A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery [68.48094108571432]
Large language models (LLMs) have revolutionized the way text and other modalities of data are handled.
We aim to provide a more holistic view of the research landscape by unveiling cross-field and cross-modal connections between scientific LLMs.
arXiv Detail & Related papers (2024-06-16T08:03:24Z) - GPT-ology, Computational Models, Silicon Sampling: How should we think about LLMs in Cognitive Science? [4.242435932138821]
We review several emerging research paradigms -- GPT-ology, LLMs-as-computational-models, and "silicon sampling".
We highlight several outstanding issues about LLMs that have to be addressed to push our science forward.
arXiv Detail & Related papers (2024-06-13T04:19:17Z) - How should the advent of large language models affect the practice of
science? [51.62881233954798]
How should the advent of large language models affect the practice of science?
We have invited four diverse groups of scientists to reflect on this query, sharing their perspectives and engaging in debate.
arXiv Detail & Related papers (2023-12-05T10:45:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.