Negotiating with LLMs: Prompt Hacks, Skill Gaps, and Reasoning Deficits
- URL: http://arxiv.org/abs/2312.03720v1
- Date: Sun, 26 Nov 2023 08:44:58 GMT
- Title: Negotiating with LLMs: Prompt Hacks, Skill Gaps, and Reasoning Deficits
- Authors: Johannes Schneider, Steffi Haag, Leona Chandra Kruse
- Abstract summary: We conduct a user study engaging over 40 individuals across all age groups in price negotiations with an LLM.
We show that the negotiated prices humans manage to achieve span a broad range, which points to a literacy gap in effectively interacting with LLMs.
- Score: 1.4003044924094596
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) like ChatGPT have reached the 100 million user
barrier in record time and may increasingly enter all areas of our lives, leading to a
diverse set of interactions between these artificial intelligence models and
humans. While many studies have discussed governance and regulation
deductively from first-order principles, few provide an inductive,
data-driven lens based on observing dialogues between humans and LLMs,
especially in non-collaborative, competitive situations that have
the potential to pose a serious threat to people. In this work, we conduct a
user study engaging over 40 individuals across all age groups in price
negotiations with an LLM. We explore how people interact with an LLM,
investigating differences in negotiation outcomes and strategies. Furthermore,
we highlight shortcomings of LLMs with respect to their reasoning capabilities
and, in turn, their susceptibility to prompt hacking, which aims to manipulate the
LLM into making agreements that are against its instructions or beyond any
rationality. We also show that the negotiated prices humans manage to achieve
span a broad range, which points to a literacy gap in effectively interacting
with LLMs.
Related papers
- FAC$^2$E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition [57.747888532651]
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks.
We present FAC$^2$E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
arXiv Detail & Related papers (2024-02-29T21:05:37Z)
- Speak Out of Turn: Safety Vulnerability of Large Language Models in Multi-turn Dialogue [10.703193963273128]
Large Language Models (LLMs) have been demonstrated to generate illegal or unethical responses.
This paper argues that humans could exploit multi-turn dialogue to induce LLMs into generating harmful information.
arXiv Detail & Related papers (2024-02-27T07:11:59Z)
- Large Language Models: A Survey [69.72787936480394]
Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks.
LLMs' general-purpose language understanding and generation abilities are acquired by training billions of model parameters on massive amounts of text data.
arXiv Detail & Related papers (2024-02-09T05:37:09Z)
- How Well Can LLMs Negotiate? NegotiationArena Platform and Analysis [50.15061156253347]
Negotiation is the basis of social interactions; humans negotiate everything from the price of cars to how to share common resources.
With rapidly growing interest in using large language models (LLMs) to act as agents on behalf of human users, such LLM agents would also need to be able to negotiate.
We develop NegotiationArena: a flexible framework for evaluating and probing the negotiation abilities of LLM agents.
arXiv Detail & Related papers (2024-02-08T17:51:48Z)
- Empowering Language Models with Active Inquiry for Deeper Understanding [31.11672018840381]
We introduce LaMAI (Language Model with Active Inquiry), designed to endow large language models with interactive engagement.
LaMAI uses active learning techniques to raise the most informative questions, fostering a dynamic bidirectional dialogue.
Our empirical studies, across a variety of complex datasets, demonstrate the effectiveness of LaMAI.
arXiv Detail & Related papers (2024-02-06T05:24:16Z)
- AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations [52.43593893122206]
AlignedCoT is an in-context learning technique for invoking Large Language Models.
It achieves consistent and correct step-wise prompts in zero-shot scenarios.
We conduct experiments on mathematical reasoning and commonsense reasoning.
arXiv Detail & Related papers (2023-11-22T17:24:21Z)
- Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations [70.7884839812069]
Large language models (LLMs) have emerged as powerful and general solutions to many natural language tasks.
However, many of the most important applications of language generation are interactive, where an agent has to talk to a person to reach a desired outcome.
In this work, we explore a new method for adapting LLMs with RL for such goal-directed dialogue.
arXiv Detail & Related papers (2023-11-09T18:45:16Z)
- DialogueLLM: Context and Emotion Knowledge-Tuned Large Language Models for Emotion Recognition in Conversations [28.15933355881604]
Large language models (LLMs) have shown extraordinary efficacy across numerous downstream natural language processing (NLP) tasks.
We propose DialogueLLM, a context and emotion knowledge tuned LLM that is obtained by fine-tuning LLaMA models.
We offer a comprehensive evaluation of our proposed model on three benchmark emotion recognition in conversation datasets.
arXiv Detail & Related papers (2023-10-17T16:15:34Z)
- Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation) to address this issue.
By deviating from natural language, CIPHER offers the advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
arXiv Detail & Related papers (2023-10-10T03:06:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.