Gricean Norms as a Basis for Effective Collaboration
- URL: http://arxiv.org/abs/2503.14484v1
- Date: Tue, 18 Mar 2025 17:54:14 GMT
- Title: Gricean Norms as a Basis for Effective Collaboration
- Authors: Fardin Saad, Pradeep K. Murukannaiah, Munindar P. Singh
- Abstract summary: We propose a normative framework that integrates Gricean norms and cognitive frameworks into large language model (LLM) based agents. Within this framework, we introduce Lamoids, GPT-4 powered agents designed to collaborate with humans.
- Score: 12.92528740921513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Effective human-AI collaboration hinges not only on the AI agent's ability to follow explicit instructions but also on its capacity to navigate ambiguity, incompleteness, invalidity, and irrelevance in communication. Gricean conversational and inference norms facilitate collaboration by aligning unclear instructions with cooperative principles. We propose a normative framework that integrates Gricean norms and cognitive frameworks -- common ground, relevance theory, and theory of mind -- into large language model (LLM) based agents. The normative framework adopts the Gricean maxims of quantity, quality, relation, and manner, along with inference, as Gricean norms to interpret unclear instructions, which are: ambiguous, incomplete, invalid, or irrelevant. Within this framework, we introduce Lamoids, GPT-4 powered agents designed to collaborate with humans. To assess the influence of Gricean norms in human-AI collaboration, we evaluate two versions of a Lamoid: one with norms and one without. In our experiments, a Lamoid collaborates with a human to achieve shared goals in a grid world (Doors, Keys, and Gems) by interpreting both clear and unclear natural language instructions. Our results reveal that the Lamoid with Gricean norms achieves higher task accuracy and generates clearer, more accurate, and contextually relevant responses than the Lamoid without norms. This improvement stems from the normative framework, which enhances the agent's pragmatic reasoning, fostering effective human-AI collaboration and enabling context-aware communication in LLM-based agents.
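To make the framework concrete, the sketch below shows how a Gricean-norm layer might sit between a human instruction and an LLM-backed agent in the Doors, Keys, and Gems grid world: first diagnose which kind of unclarity an instruction exhibits, then let the maxims decide between acting and asking a clarifying question. This is a minimal illustration under stated assumptions; the function names, prompt wording, and `call_llm` interface are invented here, not the authors' released implementation.

```python
# Illustrative sketch only: the paper does not publish this code. All names
# (respond, CLASSIFY_PROMPT, call_llm) are hypothetical.
from typing import Callable

# The four kinds of unclear instruction named in the abstract, plus "clear".
ISSUES = ("clear", "ambiguous", "incomplete", "invalid", "irrelevant")

CLASSIFY_PROMPT = (
    "You are an agent in a Doors, Keys, and Gems grid world.\n"
    "World state: {state}\n"
    "Instruction: {instruction}\n"
    "Label the instruction as exactly one of: clear, ambiguous, incomplete, "
    "invalid, irrelevant."
)

NORM_PROMPT = (
    "Apply the Gricean maxims (quantity, quality, relation, manner) plus "
    "inference to this {issue} instruction. Use common ground and the "
    "human's likely intent to either (a) act, or (b) ask one concise "
    "clarifying question.\nWorld state: {state}\nInstruction: {instruction}"
)

def respond(instruction: str, state: str, call_llm: Callable[[str], str]) -> str:
    """Interpret an instruction through Gricean norms before acting."""
    label = call_llm(
        CLASSIFY_PROMPT.format(state=state, instruction=instruction)
    ).strip().lower()
    if label not in ISSUES:
        label = "ambiguous"  # fall back conservatively on unparseable labels
    if label == "clear":
        return call_llm(f"Execute in the grid world: {instruction}\nState: {state}")
    return call_llm(NORM_PROMPT.format(issue=label, state=state, instruction=instruction))

if __name__ == "__main__":
    # Stub LLM so the sketch runs without API access.
    echo = lambda prompt: "Which key do you mean: the red one or the blue one?"
    print(respond("Pick up the key", "two keys visible: red, blue", echo))
```

The two-stage shape (diagnose, then apply norms) is the point of the sketch; per the abstract, the paper's Lamoids additionally draw on common ground, relevance theory, and theory of mind when choosing between acting and asking.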
Related papers
- Vision-Language Models Are Not Pragmatically Competent in Referring Expression Generation [16.217818305338945]
Referring Expression Generation (REG) is a core task for evaluating the pragmatic competence of vision-language systems.
Current evaluations of vision-language models (VLMs) often overlook the pragmatic dimension.
We introduce a new dataset (RefOI) of 1.5k images annotated with both written and spoken referring expressions.
arXiv Detail & Related papers (2025-04-22T17:37:16Z)
- Do LLMs trust AI regulation? Emerging behaviour of game-theoretic LLM agents [61.132523071109354]
This paper investigates the interplay between AI developers, regulators and users, modelling their strategic choices under different regulatory scenarios.
Our research identifies emerging behaviours of strategic AI agents, which tend to adopt more "pessimistic" stances than pure game-theoretic agents.
arXiv Detail & Related papers (2025-04-11T15:41:21Z)
- EgoNormia: Benchmarking Physical Social Norm Understanding [52.87904722234434]
We present EgoNormia $\|\epsilon\|$, consisting of 1,853 ego-centric videos of human interactions.
The normative actions encompass seven categories: safety, privacy, proxemics, politeness, cooperation, coordination/proactivity, and communication/legibility.
Our work demonstrates that current state-of-the-art vision-language models lack robust norm understanding, scoring a maximum of 45% on EgoNormia.
arXiv Detail & Related papers (2025-02-27T19:54:16Z)
- EAGER-LLM: Enhancing Large Language Models as Recommenders through Exogenous Behavior-Semantic Integration [60.47645731801866]
Large language models (LLMs) are increasingly leveraged as foundational backbones in advanced recommender systems.
LLMs are pre-trained on linguistic semantics but must learn collaborative semantics from scratch through the LLM backbone.
We propose EAGER-LLM, a decoder-only generative recommendation framework that integrates endogenous and exogenous behavioral and semantic information in a non-intrusive manner.
arXiv Detail & Related papers (2025-02-20T17:01:57Z)
- Position: We Need An Adaptive Interpretation of Helpful, Honest, and Harmless Principles [24.448749292993234]
The Helpful, Honest, and Harmless (HHH) principle is a framework for aligning AI systems with human values.
We argue for an adaptive interpretation of the HHH principle and propose a reference framework for its adaptation to diverse scenarios.
This work offers practical insights for improving AI alignment, ensuring that HHH principles remain both grounded and operationally effective in real-world AI deployment.
arXiv Detail & Related papers (2025-02-09T22:41:24Z)
- Language Models in Dialogue: Conversational Maxims for Human-AI Interactions [14.312321347152249]
We propose a set of maxims -- quantity, quality, relevance, manner, benevolence, and transparency -- for describing effective human-AI conversation.
We evaluate the degree to which various language models are able to understand these maxims and find that models possess an internal prioritization of principles that can significantly impact their ability to interpret the maxims accurately.
arXiv Detail & Related papers (2024-03-22T11:16:43Z)
- The Machine Psychology of Cooperation: Can GPT models operationalise prompts for altruism, cooperation, competitiveness and selfishness in economic games? [0.0]
We investigated the capability of the GPT-3.5 large language model (LLM) to operationalize natural language descriptions of cooperative, competitive, altruistic, and self-interested behavior.
We used a prompt to describe the task environment using a similar protocol to that used in experimental psychology studies with human subjects.
Our results provide evidence that LLMs can, to some extent, translate natural language descriptions of different cooperative stances into corresponding descriptions of appropriate task behaviour.
arXiv Detail & Related papers (2023-05-13T17:23:16Z)
- Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision [84.31474052176343]
Recent AI-assistant agents, such as ChatGPT, rely on supervised fine-tuning (SFT) with human annotations and reinforcement learning from human feedback to align the output with human intentions.
This dependence can significantly constrain the true potential of AI-assistant agents due to the high cost of obtaining human supervision.
We propose a novel approach called SELF-ALIGN, which combines principle-driven reasoning and the generative power of LLMs for the self-alignment of AI agents with minimal human supervision.
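As a rough illustration of the principle-driven idea, a handful of human-written principles can steer generation without per-example labels. The principles and prompt below are invented for this sketch; they are not SELF-ALIGN's released prompt set.

```python
# Minimal sketch in the spirit of principle-driven self-alignment; the
# principles, prompt wording, and the call_llm stub are illustrative
# assumptions, not the paper's actual artifacts.
PRINCIPLES = [
    "1 (ethical): decline requests that could cause harm.",
    "2 (informative): give accurate, relevant information.",
    "3 (candid): admit uncertainty instead of guessing.",
]

def principled_answer(question: str, call_llm) -> str:
    """Generate a response that first reasons over the listed principles."""
    prompt = (
        "Follow these principles when answering.\n"
        + "\n".join(PRINCIPLES)
        + f"\n\nQuestion: {question}\nFirst note which principles apply, then answer."
    )
    return call_llm(prompt)

if __name__ == "__main__":
    # Stub LLM that echoes the prompt head, so the sketch runs offline.
    print(principled_answer("Is it safe to mix bleach and ammonia?", lambda p: p[:80]))
```

Responses generated this way can then serve as fine-tuning targets, which is where the minimal human supervision comes in: humans author the principles once rather than annotating every example.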
arXiv Detail & Related papers (2023-05-04T17:59:28Z)
- Language Instructed Reinforcement Learning for Human-AI Coordination [23.694362407434753]
We propose a novel framework, instructRL, that enables humans to specify what kind of strategies they expect from their AI partners through natural language instructions.
We show that instructRL converges to human-like policies that satisfy the given instructions in a proof-of-concept environment and the challenging Hanabi benchmark.
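One plausible way to formalize instruction-conditioned training of this kind is a reward-plus-language-prior objective, in which an LLM scores how consistent an action is with the human's instruction $g$; whether this matches instructRL's exact loss is not stated in the summary above.

```latex
% One plausible formalization of instruction-conditioned RL: maximize task
% reward plus a bonus for actions an LLM prior rates as consistent with the
% human's instruction g. Not necessarily instructRL's exact objective.
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t} r(s_t, a_t)\right]
  + \lambda \, \mathbb{E}_{\pi}\!\left[\sum_{t} \log p_{\mathrm{LM}}(a_t \mid g, s_t)\right]
```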
arXiv Detail & Related papers (2023-04-13T04:47:31Z)
- NormSAGE: Multi-Lingual Multi-Cultural Norm Discovery from Conversations On-the-Fly [61.77957329364812]
We introduce a framework for addressing the novel task of conversation-grounded multi-lingual, multi-cultural norm discovery.
NormSAGE elicits knowledge about norms through directed questions representing the norm discovery task and conversation context.
It further addresses the risk of language model hallucination with a self-verification mechanism ensuring that the norms discovered are correct.
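The elicit-then-verify pattern described above can be sketched in a few lines; the prompts and `call_llm` interface here are illustrative assumptions rather than NormSAGE's actual prompts.

```python
# Sketch of the two-step pattern the summary describes: directed norm
# elicitation, then self-verification against the dialogue. Prompts and the
# call_llm interface are assumptions for illustration.
def discover_norms(dialogue: str, call_llm) -> list[str]:
    raw = call_llm(
        "Given this conversation, list social norms it reflects, one per line.\n"
        f"Conversation:\n{dialogue}"
    )
    candidates = [line.strip("- ").strip() for line in raw.splitlines() if line.strip()]
    verified = []
    for norm in candidates:
        verdict = call_llm(
            f"Conversation:\n{dialogue}\n\nIs the norm '{norm}' actually "
            "grounded in this conversation? Answer yes or no, then explain."
        )
        if verdict.lower().startswith("yes"):  # keep only grounded norms
            verified.append(norm)
    return verified
```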
arXiv Detail & Related papers (2022-10-16T18:30:05Z)
- Iterated Reasoning with Mutual Information in Cooperative and Byzantine Decentralized Teaming [0.0]
We show that reformulating an agent's policy to be conditional on the policies of its teammates inherently maximizes a Mutual Information (MI) lower bound when optimizing under Policy Gradient (PG).
Our approach, InfoPG, outperforms baselines in learning emergent collaborative behaviors and sets the state-of-the-art in decentralized cooperative MARL tasks.
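For orientation, the standard variational (Barber-Agakov) lower bound on mutual information is the kind of quantity such methods maximize; the summary does not specify that InfoPG uses exactly this form.

```latex
% Variational (Barber-Agakov) lower bound on mutual information, where q is
% any approximation to p(x|y); shown for orientation, not as InfoPG's exact
% objective.
I(X;Y) = H(X) - H(X \mid Y) \;\ge\; H(X) + \mathbb{E}_{p(x,y)}\!\left[\log q(x \mid y)\right]
```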
arXiv Detail & Related papers (2022-01-20T22:54:32Z)
- Normative Disagreement as a Challenge for Cooperative AI [56.34005280792013]
We argue that typical cooperation-inducing learning algorithms fail to cooperate in bargaining problems.
We develop a class of norm-adaptive policies and show in experiments that these significantly increase cooperation.
arXiv Detail & Related papers (2021-11-27T11:37:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.