AgreeMate: Teaching LLMs to Haggle
- URL: http://arxiv.org/abs/2412.18690v1
- Date: Tue, 24 Dec 2024 21:57:17 GMT
- Title: AgreeMate: Teaching LLMs to Haggle
- Authors: Ainesh Chatterjee, Samuel Miller, Nithin Parepally
- Abstract summary: We introduce AgreeMate, a framework for training Large Language Models to perform strategic price negotiations through natural language.
We present the performance of Large Language Models when used as agents within a decoupled (modular) bargaining architecture.
- Abstract: We introduce AgreeMate, a framework for training Large Language Models (LLMs) to perform strategic price negotiations through natural language. We apply recent advances to a negotiation setting where two agents (i.e., a buyer and a seller) use natural language to bargain over goods using coarse actions. Specifically, we present the performance of LLMs when used as agents within a decoupled (modular) bargaining architecture. We demonstrate that prompt engineering, fine-tuning, and chain-of-thought prompting enhance model performance, as defined by novel metrics. We use attention probing to show how models attend to semantic relationships between tokens during negotiations.
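As a hedged illustration of the decoupled architecture, the sketch below separates a coarse-action policy from an LLM that verbalizes the chosen action. The action set, the price-based policy, and the `llm_generate` helper are illustrative assumptions, not the paper's actual interfaces.
```python
from dataclasses import dataclass

# Assumed coarse action space for illustration only.
ACTIONS = ("accept", "reject", "counter")

@dataclass
class NegotiationState:
    listing_price: float
    last_offer: float    # the counterpart's most recent offer
    target_price: float  # this agent's private valuation

def choose_action(state: NegotiationState) -> str:
    """Toy buyer-side policy over the coarse action space."""
    if state.last_offer <= state.target_price:
        return "accept"
    if state.last_offer > 1.5 * state.target_price:
        return "reject"
    return "counter"

def act(state: NegotiationState, llm_generate) -> str:
    """Pick a coarse action, then let the language model verbalize it."""
    action = choose_action(state)
    prompt = (
        f"You are a buyer negotiating for an item listed at "
        f"${state.listing_price:.2f}. The seller's last offer was "
        f"${state.last_offer:.2f}. Write one utterance that performs "
        f"the action: {action}."
    )
    return llm_generate(prompt)
```
The point of the decoupling is that the policy reasons over prices and discrete actions while the LLM handles only surface realization.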
Related papers
- Assistive Large Language Model Agents for Socially-Aware Negotiation Dialogues [47.977032883078664]
We develop assistive agents based on Large Language Models (LLMs) that aid interlocutors in business negotiations.
A third LLM acts as a remediator agent, rewriting utterances that violate norms in order to improve negotiation outcomes.
We provide rich empirical evidence demonstrating its effectiveness across three different negotiation topics.
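A minimal sketch of the remediator pattern, assuming a hypothetical `chat` completion function and a simple yes/no norm check; the paper's actual prompts and pipeline are not reproduced here.
```python
def remediate(utterance: str, chat) -> str:
    """Rewrite a norm-violating utterance; `chat` is a hypothetical LLM call."""
    verdict = chat(
        "Does the following negotiation utterance violate social norms "
        f"(rudeness, threats, bad faith)? Answer yes or no.\n\n{utterance}"
    )
    if verdict.strip().lower().startswith("yes"):
        return chat(
            "Rewrite this negotiation utterance to be polite and "
            f"norm-compliant while keeping its intent:\n\n{utterance}"
        )
    return utterance
```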
arXiv Detail & Related papers (2024-01-29T09:07:40Z)
- Baichuan2-Sum: Instruction Finetune Baichuan2-7B Model for Dialogue Summarization [12.45299260235282]
We propose an instruction fine-tuned model, Baichuan2-Sum, for role-oriented dialogue summarization.
By setting different instructions for different roles, the model can learn from the dialogue interactions and output the expected summaries.
Experiments demonstrate that the proposed model achieves new state-of-the-art results on two public dialogue summarization datasets.
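To make the role-conditioning concrete, here is a toy sketch of building one instruction-tuning example per role from a single dialogue; the field names, roles, and template are illustrative assumptions, not Baichuan2-Sum's actual format.
```python
def build_example(dialogue: str, role: str, summary: str) -> dict:
    """One role-specific instruction-tuning record from one dialogue."""
    return {
        "instruction": f"Summarize the following dialogue from the {role}'s perspective.",
        "input": dialogue,
        "output": summary,  # the role-specific reference summary
    }

# Toy corpus: each dialogue carries one reference summary per role.
corpus = [
    ("Doctor: How do you feel? Patient: Dizzy since Monday.",
     {"doctor": "Patient reports dizziness since Monday.",
      "patient": "I described my dizziness to the doctor."}),
]
examples = [
    build_example(dialogue, role, refs[role])
    for dialogue, refs in corpus
    for role in refs
]
```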
arXiv Detail & Related papers (2024-01-27T20:20:39Z)
- Cooperation, Competition, and Maliciousness: LLM-Stakeholders Interactive Negotiation [52.930183136111864]
We propose using scorable negotiation to evaluate Large Language Models (LLMs).
To reach an agreement, agents must have strong arithmetic, inference, exploration, and planning capabilities.
We provide procedures to create new games and to increase their difficulty, yielding an evolving benchmark.
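A toy sketch of what "scorable" means here, assuming each agent holds secret per-option scores over a set of issues and a deal assigns one option per issue; the benchmark's actual games, thresholds, and scoring are richer than this.
```python
def deal_score(deal: dict, scores: dict) -> int:
    """Sum an agent's secret scores for the options chosen in the deal."""
    return sum(scores[issue][option] for issue, option in deal.items())

# Toy two-issue game: each agent values the options differently.
buyer_scores  = {"price": {"low": 30, "high": 0},  "warranty": {"yes": 20, "no": 0}}
seller_scores = {"price": {"low": 0,  "high": 30}, "warranty": {"yes": 0,  "no": 20}}

deal = {"price": "low", "warranty": "no"}
# An agreement stands only if every agent clears its private threshold.
accepted = (deal_score(deal, buyer_scores) >= 25
            and deal_score(deal, seller_scores) >= 15)
```
Because scores and thresholds are numeric and hidden, reaching a mutually acceptable deal exercises the arithmetic, inference, and planning capabilities the abstract mentions.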
arXiv Detail & Related papers (2023-09-29T13:33:06Z)
- Language of Bargaining [60.218128617765046]
We build a novel dataset for studying how the use of language shapes bilateral bargaining.
Our work also reveals linguistic signals that are predictive of negotiation outcomes.
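As a hedged baseline for "linguistic signals predictive of outcomes", the sketch below fits a plain bag-of-words classifier on dialogue text; the paper's actual features and analysis are more sophisticated, and the labels here are toy data.
```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy dialogues with toy labels: did the negotiation end in agreement?
dialogues = ["I can't go below 90, final offer.",
             "Sure, happy to meet in the middle!"]
agreed = [0, 1]

X = CountVectorizer().fit_transform(dialogues)
clf = LogisticRegression().fit(X, agreed)
```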
arXiv Detail & Related papers (2023-06-12T13:52:01Z)
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
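A minimal probe of this knowledge-context interaction, assuming a hypothetical `llm` completion function: the context contradicts a well-known fact, and we observe which source the answer follows.
```python
def probe(llm) -> str:
    """`llm` is a hypothetical prompt-to-text completion function."""
    context = "According to the provided document, the capital of France is Lyon."
    question = "What is the capital of France?"
    # A model with controllable working memory should follow relevant
    # context ("Lyon") rather than its pretrained knowledge ("Paris"),
    # while ignoring context that is irrelevant to the question.
    return llm(f"{context}\nQuestion: {question}\nAnswer:")
```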
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- Universal Sentence Representation Learning with Conditional Masked Language Model [7.334766841801749]
We present Conditional Masked Language Modeling (CMLM) to effectively learn sentence representations.
Our English CMLM model achieves state-of-the-art performance on SentEval.
As a fully unsupervised learning method, CMLM can be conveniently extended to a broad range of languages and domains.
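A conceptual sketch of the conditional objective, assuming a simple context encoder whose sentence vector conditions the masked-token predictions; the modules and shapes are illustrative stand-ins, not the paper's architecture.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, dim = 1000, 64
ctx_encoder = nn.EmbeddingBag(vocab, dim)  # stand-in for the sentence encoder
tok_embed = nn.Embedding(vocab, dim)
mlm_head = nn.Linear(2 * dim, vocab)       # predicts the masked tokens

def cmlm_loss(masked_ids, target_ids, context_ids):
    """Masked-LM loss on one sentence, conditioned on its neighbors."""
    ctx = ctx_encoder(context_ids.unsqueeze(0)).squeeze(0)  # (dim,) context vector
    h = tok_embed(masked_ids)                               # (seq, dim)
    h = torch.cat([h, ctx.expand(h.size(0), -1)], dim=-1)   # condition on context
    logits = mlm_head(h)                                    # (seq, vocab)
    return F.cross_entropy(logits, target_ids)
```
The conditioning is what forces the context vector to carry sentence-level meaning, which is why it doubles as a sentence representation.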
arXiv Detail & Related papers (2020-12-28T18:06:37Z)
- SLM: Learning a Discourse Language Representation with Sentence Unshuffling [53.42814722621715]
We introduce Sentence-level Language Modeling, a new pre-training objective for learning a discourse language representation.
We show that this objective improves the performance of the original BERT by large margins.
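A small sketch of the kind of training data such an objective consumes, assuming order-recovery targets over shuffled sentences; the paper's exact formulation may differ.
```python
import random

def make_unshuffle_example(sentences):
    """Shuffle a document's sentences; the target recovers the original order."""
    order = list(range(len(sentences)))
    random.shuffle(order)
    shuffled = [sentences[i] for i in order]
    # target[i] is the shuffled position holding the sentence that
    # originally came i-th, so [shuffled[t] for t in target] == sentences.
    target = [order.index(i) for i in range(len(sentences))]
    return shuffled, target

doc = ["She opened the door.", "It was raining.", "They left together."]
shuffled, target = make_unshuffle_example(doc)
```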
arXiv Detail & Related papers (2020-10-30T13:33:41Z)
- InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training [135.12061144759517]
We present an information-theoretic framework that formulates cross-lingual language model pre-training.
We propose a new pre-training task based on contrastive learning.
By leveraging both monolingual and parallel corpora, we jointly train on these pretext tasks to improve the cross-lingual transferability of pre-trained models.
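A sketch of a contrastive objective in this spirit: embeddings of aligned translation pairs are pulled together while other pairs in the batch serve as negatives. The encoder, batch construction, and temperature value are assumptions for illustration.
```python
import torch
import torch.nn.functional as F

def contrastive_loss(src_emb, tgt_emb, temperature=0.05):
    """src_emb, tgt_emb: (batch, dim) embeddings of aligned sentence pairs."""
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.t() / temperature   # (batch, batch) similarity matrix
    labels = torch.arange(src.size(0))     # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```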
arXiv Detail & Related papers (2020-07-15T16:58:01Z)
- Exploring Early Prediction of Buyer-Seller Negotiation Outcomes [19.35826558501076]
We explore a novel task of early prediction of buyer-seller negotiation outcomes, by varying the fraction of utterances that the model can access.
We explore the feasibility of early prediction by using traditional feature-based methods, as well as by incorporating the non-linguistic task context into a pretrained language model.
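A toy sketch of the early-prediction setup: the outcome classifier sees only the first fraction f of utterances. The classifier itself is left as a hypothetical placeholder.
```python
def truncate_dialogue(utterances, f):
    """Keep the first fraction f of utterances (at least one)."""
    k = max(1, int(len(utterances) * f))
    return utterances[:k]

dialogue = ["Hi, is this still available?", "Yes, asking $500.",
            "Would you take $400?", "Let's meet at $450.", "Deal."]
for f in (0.2, 0.5, 0.8):
    prefix = truncate_dialogue(dialogue, f)
    # predict_outcome(prefix)  # hypothetical classifier over the prefix
```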
arXiv Detail & Related papers (2020-04-06T00:49:20Z)