Learnable Strategies for Bilateral Agent Negotiation over Multiple
Issues
- URL: http://arxiv.org/abs/2009.08302v2
- Date: Fri, 7 Jan 2022 14:01:57 GMT
- Title: Learnable Strategies for Bilateral Agent Negotiation over Multiple
Issues
- Authors: Pallavi Bagga, Nicola Paoletti and Kostas Stathis
- Abstract summary: We present a novel bilateral negotiation model that allows a self-interested agent to learn how to negotiate over multiple issues.
The model relies upon interpretable strategy templates representing the tactics the agent should employ during the negotiation.
It learns template parameters to maximize the average utility received over multiple negotiations, thus resulting in optimal bid acceptance and generation.
- Score: 6.12762193927784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel bilateral negotiation model that allows a self-interested
agent to learn how to negotiate over multiple issues in the presence of user
preference uncertainty. The model relies upon interpretable strategy templates
representing the tactics the agent should employ during the negotiation and
learns template parameters to maximize the average utility received over
multiple negotiations, thus resulting in optimal bid acceptance and generation.
Our model also uses deep reinforcement learning to evaluate threshold utility
values, for those tactics that require them, thereby deriving optimal utilities
for every environment state. To handle user preference uncertainty, the model
relies on a stochastic search to find the user model that best agrees with a given
partial preference profile. Multi-objective optimization and multi-criteria
decision-making methods are applied at negotiation time to generate
Pareto-optimal outcomes thereby increasing the number of successful (win-win)
negotiations. Rigorous experimental evaluations show that the agent employing
our model outperforms the winning agents of the 10th Automated Negotiating
Agents Competition (ANAC'19) in terms of individual as well as social-welfare
utilities.
Related papers
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Fronts (MAP).
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
arXiv Detail & Related papers (2024-06-11T17:55:25Z)
- Assistive Large Language Model Agents for Socially-Aware Negotiation Dialogues [47.977032883078664]
We develop assistive agents based on Large Language Models (LLMs).
We simulate business negotiations by letting two LLM-based agents engage in role play.
A third LLM acts as a remediator agent to rewrite utterances violating norms for improving negotiation outcomes.
arXiv Detail & Related papers (2024-01-29T09:07:40Z)
- INA: An Integrative Approach for Enhancing Negotiation Strategies with Reward-Based Dialogue System [22.392304683798866]
We propose a novel negotiation dialogue agent designed for the online marketplace.
We employ a set of novel rewards, tailored specifically to the negotiation task, to train our negotiation agent.
Our results demonstrate that the proposed approach and reward system significantly enhance the agent's negotiation capabilities.
arXiv Detail & Related papers (2023-10-27T15:31:16Z)
- ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate [57.71597869337909]
We build a multi-agent referee team called ChatEval to autonomously discuss and evaluate the quality of generated responses from different models.
Our analysis shows that ChatEval transcends mere textual scoring, offering a human-mimicking evaluation process for reliable assessments.
arXiv Detail & Related papers (2023-08-14T15:13:04Z)
- Off-policy evaluation for learning-to-rank via interpolating the item-position model and the position-based model [83.83064559894989]
A critical need for industrial recommender systems is the ability to evaluate recommendation policies offline, before deploying them to production.
We develop a new estimator that mitigates the problems of the two most popular off-policy estimators for rankings.
In particular, the new estimator, called INTERPOL, addresses the bias of a potentially misspecified position-based model.
arXiv Detail & Related papers (2022-10-15T17:22:30Z)
- Targeted Data Acquisition for Evolving Negotiation Agents [6.953246373478702]
Successful negotiators must learn how to balance optimizing for self-interest and cooperation.
Current artificial negotiation agents often heavily depend on the quality of the static datasets they were trained on.
We introduce a targeted data acquisition framework where we guide the exploration of a reinforcement learning agent.
arXiv Detail & Related papers (2021-06-14T19:45:59Z)
- Model-based Multi-agent Policy Optimization with Adaptive Opponent-wise Rollouts [52.844741540236285]
This paper investigates model-based methods in multi-agent reinforcement learning (MARL).
We propose a novel decentralized model-based MARL method, named Adaptive Opponent-wise Rollout Policy (AORPO).
arXiv Detail & Related papers (2021-05-07T16:20:22Z)
- Improving Dialog Systems for Negotiation with Personality Modeling [30.78850714931678]
We introduce a probabilistic formulation to encapsulate the opponent's personality type during both learning and inference.
We test our approach on the CraigslistBargain dataset and show that our method using ToM inference achieves a 20% higher dialog agreement rate.
arXiv Detail & Related papers (2020-10-20T01:46:03Z)
- On the model-based stochastic value gradient for continuous reinforcement learning [50.085645237597056]
We show that simple model-based agents can outperform state-of-the-art model-free agents in terms of both sample-efficiency and final reward.
Our findings suggest that model-based policy evaluation deserves closer attention.
arXiv Detail & Related papers (2020-08-28T17:58:29Z)
- A Deep Reinforcement Learning Approach to Concurrent Bilateral Negotiation [6.484413431061962]
We present a novel negotiation model that allows an agent to learn how to negotiate during concurrent bilateral negotiations in unknown and dynamic e-markets.
The agent uses an actor-critic architecture with model-free reinforcement learning to learn a strategy expressed as a deep neural network.
As a result, we can build automated agents for concurrent negotiations that can adapt to different e-market settings without the need to be pre-programmed.
arXiv Detail & Related papers (2020-01-31T12:05:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.