Indirect Dynamic Negotiation in the Nash Demand Game
- URL: http://arxiv.org/abs/2409.06566v1
- Date: Tue, 10 Sep 2024 14:58:00 GMT
- Title: Indirect Dynamic Negotiation in the Nash Demand Game
- Authors: Tatiana V. Guy, Jitka Homolová, Aleksej Gaj
- Abstract summary: We propose a decision model that helps agents bargain successfully by performing indirect negotiation and learning a model of the opponent.
We illustrate our approach by applying our model to the Nash demand game, which is an abstract model of bargaining.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper addresses the problem of sequential bilateral bargaining with incomplete information. We propose a decision model that helps agents bargain successfully by performing indirect negotiation and learning a model of the opponent. Methodologically, the paper casts heuristically-motivated bargaining by a self-interested independent player into a framework of Bayesian learning and Markov decision processes. The special form of the reward implicitly motivates the players to negotiate indirectly, via closed-loop interaction. We illustrate the approach by applying our model to the Nash demand game, an abstract model of bargaining. The results indicate that the established negotiation: i) leads to coordination of the players' actions; ii) maximises the success rate of the game; and iii) brings more individual profit to the players.
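The Nash demand game used as the paper's testbed has a simple payoff rule: two players simultaneously demand a share of a unit pie; if the demands are compatible, each player receives what they demanded, otherwise both receive the disagreement payoff. A minimal sketch of that rule (the function name and the disagreement payoff of 0 are illustrative assumptions, not taken from the paper):

```python
# Minimal sketch of the one-shot Nash demand game over a unit pie.
# Each player simultaneously demands a share in [0, 1]. If the demands
# sum to at most 1 they are compatible and both players are paid what
# they asked for; otherwise the bargain fails and both receive the
# disagreement payoff (assumed to be 0 here).

def nash_demand_game(demand_a: float, demand_b: float,
                     disagreement: float = 0.0) -> tuple[float, float]:
    """Return the pair of payoffs (payoff_a, payoff_b) for one round."""
    if demand_a + demand_b <= 1.0:
        return demand_a, demand_b
    return disagreement, disagreement

# Compatible demands: both players get their ask.
print(nash_demand_game(0.6, 0.4))  # -> (0.6, 0.4)
# Incompatible demands: the bargain fails and both get nothing.
print(nash_demand_game(0.7, 0.5))  # -> (0.0, 0.0)
```

The paper's contribution sits on top of this rule: the agents play the game repeatedly and learn to coordinate their demands via the closed-loop interaction described above, rather than in a single one-shot round.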
Related papers
- A Dialogue Game for Eliciting Balanced Collaboration [64.61707514432533]
We present a two-player 2D object placement game in which the players must negotiate the goal state themselves.
We show empirically that human players exhibit a variety of role distributions, and that balanced collaboration improves task performance.
arXiv Detail & Related papers (2024-06-12T13:35:10Z)
- Assistive Large Language Model Agents for Socially-Aware Negotiation Dialogues [47.977032883078664]
We develop assistive agents based on Large Language Models (LLMs).
We simulate business negotiations by letting two LLM-based agents engage in role play.
A third LLM acts as a remediator agent to rewrite utterances violating norms for improving negotiation outcomes.
arXiv Detail & Related papers (2024-01-29T09:07:40Z)
- It Takes Two to Negotiate: Modeling Social Exchange in Online Multiplayer Games [14.109494237243762]
This work studies online player interactions during the turn-based strategy game, Diplomacy.
We annotated a dataset of over 10,000 chat messages for different negotiation strategies.
arXiv Detail & Related papers (2023-11-15T03:21:04Z)
- Be Selfish, But Wisely: Investigating the Impact of Agent Personality in Mixed-Motive Human-Agent Interactions [24.266490660606497]
We find that self-play RL fails to learn the value of compromise in a negotiation.
We modify the training procedure in two novel ways to design agents with diverse personalities and analyze their performance with human partners.
We find that although both techniques show promise, a selfish agent that maximizes its own performance while avoiding walkaways outperforms the other variants, implicitly learning to generate value for both itself and its negotiation partner.
arXiv Detail & Related papers (2023-10-22T20:31:35Z)
- Language of Bargaining [60.218128617765046]
We build a novel dataset for studying how the use of language shapes bilateral bargaining.
Our work also reveals linguistic signals that are predictive of negotiation outcomes.
arXiv Detail & Related papers (2023-06-12T13:52:01Z)
- Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback [97.54519989641388]
We study whether multiple large language models (LLMs) can autonomously improve each other in a negotiation game by playing, reflecting, and criticizing.
Only a subset of the language models we consider can self-play and improve the deal price from AI feedback.
arXiv Detail & Related papers (2023-05-17T11:55:32Z)
- Moody Learners -- Explaining Competitive Behaviour of Reinforcement Learning Agents [65.2200847818153]
In a competitive scenario, the agent not only faces a dynamic environment but is also directly affected by the opponents' actions.
Observing the agent's Q-values is a common way of explaining its behavior; however, they do not show the temporal relation between the selected actions.
arXiv Detail & Related papers (2020-07-30T11:30:42Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's responses.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- Numerical Abstract Persuasion Argumentation for Expressing Concurrent Multi-Agent Negotiations [3.7311680121118336]
Existing proposals for argumentation-based negotiations have focused primarily on two-agent bilateral negotiations.
However, a negotiation process between two agents e1 and e2 can be interleaved with another negotiation process between, say, e1 and e3.
We show that the extended theory adapts well to concurrent multi-agent negotiations over scarce resources.
arXiv Detail & Related papers (2020-01-23T01:46:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.