Modelling Political Coalition Negotiations Using LLM-based Agents
- URL: http://arxiv.org/abs/2402.11712v1
- Date: Sun, 18 Feb 2024 21:28:06 GMT
- Title: Modelling Political Coalition Negotiations Using LLM-based Agents
- Authors: Farhad Moghimifar, Yuan-Fang Li, Robert Thomson, Gholamreza Haffari
- Abstract summary: We introduce coalition negotiations as a novel NLP task and model them as negotiations between large language model-based agents.
We introduce a multilingual dataset, POLCA, comprising manifestos of European political parties and coalition agreements over a number of elections in these countries.
We propose a hierarchical Markov decision process designed to simulate the process of coalition negotiation between political parties and predict the outcomes.
- Score: 53.934372246390495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Coalition negotiations are a cornerstone of parliamentary democracies,
characterised by complex interactions and strategic communications among
political parties. Despite their significance, the modelling of these
negotiations has remained unexplored within the domain of Natural Language
Processing (NLP), mostly due to the lack of proper data. In this paper, we
introduce coalition negotiations as a novel NLP task and model them as
negotiations between large language model-based agents. We introduce a
multilingual dataset, POLCA, comprising manifestos of European political
parties and coalition agreements over a number of elections in these countries.
This dataset addresses the challenge of the current scope limitations in
political negotiation modelling by providing a diverse, real-world basis for
simulation. Additionally, we propose a hierarchical Markov decision process
designed to simulate the process of coalition negotiation between political
parties and predict the outcomes. We evaluate the performance of
state-of-the-art large language models (LLMs) as agents in handling coalition
negotiations, offering insights into their capabilities and paving the way for
future advancements in political modelling.
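The abstract names a hierarchical Markov decision process over LLM-based party agents but does not reproduce any implementation here. The snippet below is only a minimal, hypothetical sketch of such a setup, assuming topic selection as the high-level decision and a turn-by-turn utterance exchange as the low-level one; `PartyAgent`, `query_llm`, and `reached_consensus` are illustrative placeholders, not the authors' code or the POLCA pipeline.
```python
# Hypothetical sketch of a two-level negotiation loop between LLM-based party
# agents. All names and the consensus rule are illustrative assumptions.
from dataclasses import dataclass

def query_llm(prompt: str) -> str:
    # Placeholder: swap in any chat-completion API (e.g. an OpenAI or local Llama call).
    return f"[model response to: {prompt[:40]}...]"

def reached_consensus(history: list) -> bool:
    # Placeholder: in practice this could be an LLM judge or a rule-based check.
    return len(history) >= 4

@dataclass
class PartyAgent:
    name: str
    manifesto: str  # party positions, e.g. drawn from a manifesto text

    def propose(self, topic: str, history: list) -> str:
        # Low-level step: produce an utterance conditioned on the manifesto,
        # the current policy topic, and the dialogue so far.
        prompt = (f"You negotiate for {self.name}. Manifesto: {self.manifesto}\n"
                  f"Topic: {topic}\nHistory: {history}\nState your position:")
        return query_llm(prompt)

def negotiate(parties: list, topics: list, max_turns: int = 6) -> dict:
    """High-level loop: iterate over policy topics (sub-goals); for each topic,
    run a low-level exchange of utterances until agreement or a turn limit."""
    agreement = {}
    for topic in topics:                 # high-level action: pick the next agenda item
        history = []
        for _ in range(max_turns):       # low-level negotiation sub-episode
            for party in parties:
                history.append((party.name, party.propose(topic, history)))
            if reached_consensus(history):
                agreement[topic] = history[-1][1]  # record the closing utterance
                break
    return agreement

if __name__ == "__main__":
    parties = [PartyAgent("Party A", "expand renewable energy"),
               PartyAgent("Party B", "cut energy taxes")]
    print(negotiate(parties, ["energy policy", "taxation"]))
```
In the paper the quantity of interest is the negotiated coalition agreement; this sketch simply records one closing utterance per settled topic to make the loop structure concrete.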
Related papers
- Conditional Language Policy: A General Framework for Steerable Multi-Objective Finetuning [72.46388818127105]
Conditional Language Policy (CLP) is a framework for finetuning language models on multiple objectives.
We show that CLP learns steerable models that effectively trade-off conflicting objectives at inference time.
arXiv Detail & Related papers (2024-07-22T16:13:38Z) - Llama meets EU: Investigating the European Political Spectrum through the Lens of LLMs [18.836470390824633]
We audit Llama Chat in the context of EU politics to analyze the model's political knowledge and its ability to reason in context.
We adapt, i.e., further fine-tune, Llama Chat on speeches of individual euro-parties from debates in the European Parliament to reevaluate its political leaning.
arXiv Detail & Related papers (2024-03-20T13:42:57Z) - Assistive Large Language Model Agents for Socially-Aware Negotiation Dialogues [47.977032883078664]
We develop assistive agents based on Large Language Models (LLMs).
We simulate business negotiations by letting two LLM-based agents engage in role play.
A third LLM acts as a remediator agent, rewriting norm-violating utterances to improve negotiation outcomes.
arXiv Detail & Related papers (2024-01-29T09:07:40Z) - Inducing Political Bias Allows Language Models Anticipate Partisan
Reactions to Controversies [5.958974943807783]
This study addresses the challenge of understanding political bias in digitized discourse using Large Language Models (LLMs).
We present a comprehensive analytical framework, consisting of Partisan Bias Divergence Assessment and Partisan Class Tendency Prediction.
Our findings reveal the model's effectiveness in capturing emotional and moral nuances, albeit with some challenges in stance detection.
arXiv Detail & Related papers (2023-11-16T08:57:53Z) - Plug-and-Play Policy Planner for Large Language Model Powered Dialogue
Agents [121.46051697742608]
We introduce a new dialogue policy planning paradigm for tackling dialogue problems with a tunable language model plug-in named PPDPP.
Specifically, we develop a novel training framework to facilitate supervised fine-tuning over available human-annotated data.
PPDPP consistently and substantially outperforms existing approaches on three different proactive dialogue applications.
arXiv Detail & Related papers (2023-11-01T03:20:16Z) - INA: An Integrative Approach for Enhancing Negotiation Strategies with
Reward-Based Dialogue System [22.392304683798866]
We propose a novel negotiation dialogue agent designed for the online marketplace.
We employ a set of novel rewards, specifically tailored for the negotiation task, to train our negotiation agent.
Our results demonstrate that the proposed approach and reward system significantly enhance the agent's negotiation capabilities.
arXiv Detail & Related papers (2023-10-27T15:31:16Z) - Language of Bargaining [60.218128617765046]
We build a novel dataset for studying how the use of language shapes bilateral bargaining.
Our work also reveals linguistic signals that are predictive of negotiation outcomes.
arXiv Detail & Related papers (2023-06-12T13:52:01Z) - Modeling Long Context for Task-Oriented Dialogue State Generation [51.044300192906995]
We propose a multi-task learning model with a simple yet effective utterance tagging technique and a bidirectional language model.
Our approach addresses the problem that the baseline's performance drops significantly when the input dialogue context is long.
In our experiments, our proposed model achieves a 7.03% relative improvement over the baseline, establishing a new state-of-the-art joint goal accuracy of 52.04% on the MultiWOZ 2.0 dataset.
arXiv Detail & Related papers (2020-04-29T11:02:25Z)