Actor Identification in Discourse: A Challenge for LLMs?
- URL: http://arxiv.org/abs/2402.00620v1
- Date: Thu, 1 Feb 2024 14:30:39 GMT
- Title: Actor Identification in Discourse: A Challenge for LLMs?
- Authors: Ana Barić, Sean Papay, and Sebastian Padó
- Abstract summary: We show how to identify political actors who put forward claims in public debate.
We compare a traditional pipeline of dedicated NLP components with an LLM.
We find that the LLM is very good at identifying the right reference, but struggles to generate the correct canonical form.
- Score: 2.8728982844941187
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The identification of political actors who put forward claims in public
debate is a crucial step in the construction of discourse networks, which are
helpful to analyze societal debates. Actor identification is, however, rather
challenging: Often, the locally mentioned speaker of a claim is only a pronoun
("He proposed that [claim]"), so recovering the canonical actor name requires
discourse understanding. We compare a traditional pipeline of dedicated NLP
components (similar to those applied to the related task of coreference) with
an LLM, which appears to be a good match for this generation task. Evaluating on
a corpus of German actors in newspaper reports, we find, surprisingly, that the LLM
performs worse. Further analysis reveals that the LLM is very good at
identifying the right reference, but struggles to generate the correct
canonical form. This points to an underlying issue in LLMs with controlling
generated output. Indeed, a hybrid model combining the LLM with a classifier to
normalize its output substantially outperforms both initial models.
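To make the hybrid setup concrete, below is a minimal Python sketch of the two-stage idea: an LLM resolves the speaker of a claim from the discourse context, and a second stage normalizes its free-form answer to a canonical actor name. The actor inventory, the `llm_identify_actor` stub, and the string-matching normalizer are illustrative placeholders only; the paper's hybrid model uses a trained classifier for the normalization step.

```python
from difflib import get_close_matches

# Hypothetical canonical actor inventory (placeholder names, not the paper's corpus).
CANONICAL_ACTORS = ["Angela Merkel", "Olaf Scholz", "Christian Lindner"]

def llm_identify_actor(claim_sentence: str, context: str) -> str:
    """Stand-in for the LLM call that resolves the speaker of a claim.

    Per the paper's findings, the LLM usually picks the right referent from the
    discourse but may return a non-canonical surface form, e.g. "Chancellor
    Scholz" instead of "Olaf Scholz". Here it simply returns a canned answer.
    """
    return "Chancellor Scholz"

def normalize_actor(llm_output: str) -> str:
    """Second stage: map the LLM's free-form answer onto the closest canonical
    actor name. A simple string matcher stands in for the paper's classifier."""
    matches = get_close_matches(llm_output, CANONICAL_ACTORS, n=1, cutoff=0.0)
    return matches[0] if matches else llm_output

claim = "He proposed that the minimum wage be raised."
context = "Earlier in the article, Olaf Scholz outlined his budget plans."
print(normalize_actor(llm_identify_actor(claim, context)))  # -> Olaf Scholz
```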
Related papers
- Arbiters of Ambivalence: Challenges of Using LLMs in No-Consensus Tasks [52.098988739649705]
This study examines the biases and limitations of LLMs in three roles: answer generator, judge, and debater.
We develop a "no-consensus" benchmark by curating examples that encompass a variety of a priori ambivalent scenarios.
Our results show that while LLMs can provide nuanced assessments when generating open-ended answers, they tend to take a stance on no-consensus topics when employed as judges or debaters.
arXiv Detail & Related papers (2025-05-28T01:31:54Z) - Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z) - From Test-Taking to Test-Making: Examining LLM Authoring of Commonsense Assessment Items [0.18416014644193068]
We consider LLMs as authors of commonsense assessment items.
We prompt LLMs to generate items in the style of a prominent benchmark for commonsense reasoning.
We find that LLMs that succeed in answering the original COPA benchmark are also more successful in authoring their own items.
arXiv Detail & Related papers (2024-10-18T22:42:23Z) - Order Matters in Hallucination: Reasoning Order as Benchmark and Reflexive Prompting for Large-Language-Models [0.0]
Large language models (LLMs) have generated significant attention since their inception, finding applications across various academic and industrial domains.
LLMs often suffer from the "hallucination problem", where outputs, though grammatically and logically coherent, lack factual accuracy or are entirely fabricated.
arXiv Detail & Related papers (2024-08-09T14:34:32Z) - Intermittent Semi-working Mask: A New Masking Paradigm for LLMs [13.271151693864114]
Multi-turn dialogues are a key interaction method between humans and Large Language Models (LLMs).
We propose a novel masking scheme called Intermittent Semi-working Mask (ISM) to address these problems.
arXiv Detail & Related papers (2024-08-01T13:22:01Z) - Analyzing the Role of Semantic Representations in the Era of Large Language Models [104.18157036880287]
We investigate the role of semantic representations in the era of large language models (LLMs).
We propose an AMR-driven chain-of-thought prompting method, which we call AMRCoT.
We find that it is difficult to predict which input examples AMR may help or hurt on, but errors tend to arise with multi-word expressions.
arXiv Detail & Related papers (2024-05-02T17:32:59Z) - AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations [52.43593893122206]
AlignedCoT is an in-context learning technique for prompting Large Language Models.
It achieves consistent and correct step-wise prompts in zero-shot scenarios.
We conduct experiments on mathematical reasoning and commonsense reasoning.
arXiv Detail & Related papers (2023-11-22T17:24:21Z) - Sentiment Analysis through LLM Negotiations [58.67939611291001]
A standard paradigm for sentiment analysis is to rely on a single LLM and make the decision in a single round.
This paper introduces a multi-LLM negotiation framework for sentiment analysis.
arXiv Detail & Related papers (2023-11-03T12:35:29Z) - Using Large Language Models for Qualitative Analysis can Introduce
Serious Bias [0.09208007322096534]
Large Language Models (LLMs) are quickly becoming ubiquitous, but the implications for social science research are not yet well understood.
This paper asks whether LLMs can help us analyse large-N qualitative data from open-ended interviews, with an application to transcripts of interviews with Rohingya refugees in Cox's Bazaar, Bangladesh.
We find that a great deal of caution is needed in using LLMs to annotate text as there is a risk of introducing biases that can lead to misleading inferences.
arXiv Detail & Related papers (2023-09-29T11:19:15Z) - Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate [85.3444184685235]
We propose a Multi-Agent Debate (MAD) framework, in which multiple agents present their arguments in a "tit for tat" fashion and a judge manages the debate process to obtain a final solution (a minimal sketch of such a debate loop appears at the end of this list).
Our framework encourages divergent thinking in LLMs, which would be helpful for tasks that require deep levels of contemplation.
arXiv Detail & Related papers (2023-05-30T15:25:45Z) - In-Context Impersonation Reveals Large Language Models' Strengths and
Biases [56.61129643802483]
We ask LLMs to assume different personas before solving vision and language tasks.
We find that LLMs pretending to be children of different ages recover human-like developmental stages.
In a language-based reasoning task, we find that LLMs impersonating domain experts perform better than LLMs impersonating non-domain experts.
arXiv Detail & Related papers (2023-05-24T09:13:15Z) - Check Your Facts and Try Again: Improving Large Language Models with
External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes LLM-Augmenter, a system which augments a black-box LLM with a set of plug-and-play modules.
arXiv Detail & Related papers (2023-02-24T18:48:43Z)
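As a rough illustration of the Multi-Agent Debate idea referenced above, here is a minimal agents-plus-judge loop. The `ask_llm` helper, the prompt wording, the two-debater roles, and the fixed round count are assumptions made for this sketch, not the exact protocol of the MAD paper.

```python
def ask_llm(role: str, prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned reply here."""
    return f"[{role}] response to: {prompt.splitlines()[0]}"

def multi_agent_debate(question: str, rounds: int = 3) -> str:
    transcript: list[str] = []
    for _ in range(rounds):
        # The two debaters answer in turn, each seeing the debate so far.
        for role in ("affirmative", "negative"):
            argument = ask_llm(role, question + "\nDebate so far:\n" + "\n".join(transcript))
            transcript.append(argument)
    # A judge reads the full debate and produces the final solution.
    return ask_llm("judge", question + "\nDecide based on:\n" + "\n".join(transcript))

print(multi_agent_debate("Is 3307 a prime number?"))
```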