How to Use Large Language Models for Text Coding: The Case of Fatherhood
Roles in Public Policy Documents
- URL: http://arxiv.org/abs/2311.11844v2
- Date: Fri, 15 Dec 2023 17:18:48 GMT
- Title: How to Use Large Language Models for Text Coding: The Case of Fatherhood
Roles in Public Policy Documents
- Authors: Lorenzo Lupo, Oscar Magnusson, Dirk Hovy, Elin Naurin, Lena Wängnerud
- Abstract summary: Large language models (LLMs) have opened up new opportunities for text analysis in political science.
In this study, we evaluate LLMs on three original coding tasks of non-English political science texts.
- Score: 21.090506974145566
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in large language models (LLMs) like GPT-3 and GPT-4 have
opened up new opportunities for text analysis in political science. They
promise automation with better results and less programming. In this study, we
evaluate LLMs on three original coding tasks of non-English political science
texts, and we provide a detailed description of a general workflow for using
LLMs for text coding in political science research. Our use case offers a
practical guide for researchers looking to incorporate LLMs into their research
on text analysis. We find that, when provided with detailed label definitions
and coding examples, an LLM can be as good as or even better than a human
annotator while being much faster (up to hundreds of times), considerably
cheaper (costing up to 60% less than human coding), and much easier to scale to
large amounts of text. Overall, LLMs present a viable option for most text
coding projects.
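As a rough illustration of the workflow the abstract describes (supplying the model with detailed label definitions and a few coding examples, then coding texts at scale), here is a minimal, hypothetical sketch. It assumes the OpenAI Python client; the label scheme, definitions, and example passages are invented placeholders, not the authors' actual codebook or prompts.

```python
# Minimal sketch of LLM text coding: the codebook (label definitions) and a few
# coded examples go into the prompt, then each new excerpt is coded in turn.
# Labels, definitions, and example texts below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CODEBOOK = """You are coding excerpts from public policy documents.
Assign exactly one label to each excerpt:
- CAREGIVER: the father is portrayed as providing childcare or emotional support.
- PROVIDER: the father is portrayed mainly as an economic provider.
- NONE: the excerpt does not refer to fatherhood roles.
Answer with the label only."""

# Few-shot coding examples (hypothetical), shown to the model as prior turns.
EXAMPLES = [
    ("Fathers should be able to stay home with their newborn children.", "CAREGIVER"),
    ("The reform protects the family income earned by the father.", "PROVIDER"),
]

def code_text(excerpt: str, model: str = "gpt-4") -> str:
    """Return the label the model assigns to a single excerpt."""
    messages = [{"role": "system", "content": CODEBOOK}]
    for text, label in EXAMPLES:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": excerpt})
    response = client.chat.completions.create(
        model=model, messages=messages, temperature=0
    )
    return response.choices[0].message.content.strip()

print(code_text("Paternity leave lets fathers share the daily care work."))
```

Scaling this up is a loop over the corpus; agreement with human coders can then be checked on a held-out, double-coded sample before coding the full collection.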
Related papers
- Can LLMs Replace Humans During Code Chunking? [2.4056836012742]
Large language models (LLMs) have become essential tools in computer science, especially for tasks involving code understanding and generation.
This paper examines the application of LLMs in the modernization of legacy government code written in ALC and MUMPS.
arXiv Detail & Related papers (2025-06-24T13:02:35Z)
- PDL: A Declarative Prompt Programming Language [1.715270928578365]
This paper introduces the Prompt Declaration Language (PDL).
PDL is a simple declarative data-oriented language that puts prompts at the forefront, based on YAML.
It supports writing interactive applications that call large language models (LLMs) and tools, and makes it easy to implement common use-cases such as chatbots, RAG, or agents.
arXiv Detail & Related papers (2024-10-24T20:07:08Z)
- ReMoDetect: Reward Models Recognize Aligned LLM's Generations [55.06804460642062]
Aligned large language models (LLMs) generate texts that humans tend to prefer.
In this paper, we identify the common characteristics shared by these models.
We propose two training schemes to further improve the detection ability of the reward model.
arXiv Detail & Related papers (2024-05-27T17:38:33Z)
- Large Language Models: A Survey [69.72787936480394]
Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks.
LLMs acquire their general-purpose language understanding and generation abilities by training billions of model parameters on massive amounts of text data.
arXiv Detail & Related papers (2024-02-09T05:37:09Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z)
- Large Language Model-Aware In-Context Learning for Code Generation [75.68709482932903]
Large language models (LLMs) have shown impressive in-context learning (ICL) ability in code generation.
We propose a novel learning-based selection approach named LAIL (LLM-Aware In-context Learning) for code generation.
arXiv Detail & Related papers (2023-10-15T06:12:58Z)
- How to use LLMs for Text Analysis [0.0]
This guide introduces Large Language Models (LLMs) as a highly versatile text analysis method within the social sciences.
As LLMs are easy to use, cheap, fast, and applicable to a broad range of text analysis tasks, many scholars believe that LLMs will transform how we do text analysis.
arXiv Detail & Related papers (2023-07-24T19:54:15Z)
- Open-Source LLMs for Text Annotation: A Practical Guide for Model Setting and Fine-Tuning [5.822010906632045]
This paper studies the performance of open-source Large Language Models (LLMs) in text classification tasks typical for political science research.
By examining tasks like stance, topic, and relevance classification, we aim to guide scholars in making informed decisions about their use of LLMs for text analysis.
arXiv Detail & Related papers (2023-07-05T10:15:07Z)
- PALR: Personalization Aware LLMs for Recommendation [7.407353565043918]
PALR aims to combine users' historical behaviors (such as clicks, purchases, and ratings) with large language models (LLMs) to generate items that users prefer.
Our solution outperforms state-of-the-art models on various sequential recommendation tasks.
arXiv Detail & Related papers (2023-05-12T17:21:33Z)
- Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes [54.13559879916708]
EVAPORATE is a prototype system powered by large language models (LLMs).
Code synthesis is cheap, but far less accurate than directly processing each document with the LLM.
We propose an extended code implementation, EVAPORATE-CODE+, which achieves better quality than direct extraction.
arXiv Detail & Related papers (2023-04-19T06:00:26Z)
- Low-code LLM: Graphical User Interface over Large Language Models [115.08718239772107]
This paper introduces a novel human-LLM interaction framework, Low-code LLM.
It incorporates six types of simple low-code visual programming interactions to achieve more controllable and stable responses.
We highlight three advantages of the low-code LLM: user-friendly interaction, controllable generation, and wide applicability.
arXiv Detail & Related papers (2023-04-17T09:27:40Z) - Can Large Language Models Transform Computational Social Science? [79.62471267510963]
Large Language Models (LLMs) are capable of performing many language processing tasks zero-shot (without training data).
This work provides a road map for using LLMs as Computational Social Science tools.
arXiv Detail & Related papers (2023-04-12T17:33:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.