A utility-based analysis of equilibria in multi-objective normal form games
- URL: http://arxiv.org/abs/2001.08177v1
- Date: Fri, 17 Jan 2020 22:27:38 GMT
- Title: A utility-based analysis of equilibria in multi-objective normal form games
- Authors: Roxana Rădulescu, Patrick Mannion, Yijie Zhang, Diederik M. Roijers, and Ann Nowé
- Abstract summary: We argue that compromises between competing objectives in MOMAS should be analysed on the basis of the utility that these compromises have for the users of a system.
This utility-based approach naturally leads to two different optimisation criteria for agents in a MOMAS.
We show that the choice of optimisation criterion can radically alter the set of equilibria in a MONFG when non-linear utility functions are used.
- Score: 4.632366780742502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multi-objective multi-agent systems (MOMAS), agents explicitly consider
the possible tradeoffs between conflicting objective functions. We argue that
compromises between competing objectives in MOMAS should be analysed on the
basis of the utility that these compromises have for the users of a system,
where an agent's utility function maps their payoff vectors to scalar utility
values. This utility-based approach naturally leads to two different
optimisation criteria for agents in a MOMAS: expected scalarised returns (ESR)
and scalarised expected returns (SER). In this article, we explore the
differences between these two criteria using the framework of multi-objective
normal form games (MONFGs). We demonstrate that the choice of optimisation
criterion (ESR or SER) can radically alter the set of equilibria in a MONFG
when non-linear utility functions are used.
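The ESR/SER distinction can be made concrete with a small sketch (the payoffs and utility function below are illustrative, not from the paper): ESR applies the utility function to each stochastic outcome and then takes the expectation, while SER applies the utility function to the expected payoff vector. Under a non-linear utility the two can disagree sharply.

```python
# Illustrative sketch (assumed example): ESR vs SER under a
# non-linear utility u(v) = v[0] * v[1].

def u(v):
    # Hypothetical non-linear (multiplicative) utility function.
    return v[0] * v[1]

# Two equally likely payoff vectors, e.g. from a mixed strategy.
payoffs = [(4.0, 0.0), (0.0, 4.0)]

# ESR: apply the utility to each outcome, then take the expectation.
esr = sum(u(v) for v in payoffs) / len(payoffs)

# SER: take the expected payoff vector first, then apply the utility.
mean_vec = tuple(sum(c) / len(payoffs) for c in zip(*payoffs))
ser = u(mean_vec)

print(esr)  # 0.0
print(ser)  # 4.0
```

With a linear utility the two criteria would coincide; the gap here (0 vs 4) is exactly the kind of divergence that can alter the set of equilibria.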
Related papers
- A Unifying Framework for Action-Conditional Self-Predictive Reinforcement Learning [48.59516337905877]
Learning a good representation is a crucial challenge for Reinforcement Learning (RL) agents.
Recent work has developed theoretical insights into these algorithms.
We take a step towards bridging the gap between theory and practice by analyzing an action-conditional self-predictive objective.
arXiv Detail & Related papers (2024-06-04T07:22:12Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- Value function interference and greedy action selection in value-based multi-objective reinforcement learning [1.4206639868377509]
Multi-objective reinforcement learning (MORL) algorithms extend conventional reinforcement learning (RL) to settings with multiple, possibly conflicting, objectives.
We show that, if the user's utility function maps widely varying vector-values to similar levels of utility, this can lead to interference.
We demonstrate empirically that avoiding the use of random tie-breaking when identifying greedy actions can ameliorate, but not fully overcome, the problems caused by value function interference.
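The interference effect can be sketched in a few lines (a hypothetical example, not the paper's setup): two actions with widely different vector values map to the same utility, so greedy selection ties, and random tie-breaking makes the chosen action unstable across calls.

```python
# Hypothetical sketch of value-function interference: two actions whose
# vector values differ widely map to identical utilities, so greedy
# selection ties. Deterministic tie-breaking (always the first maximiser)
# keeps the choice stable; random tie-breaking flips between them,
# injecting noise into subsequent learning targets.
import random

def u(v):
    # Assumed non-linear utility (worst-case objective); not from the paper.
    return min(v)

action_values = {
    "a": (5.0, 1.0),   # widely different vector values ...
    "b": (1.0, 5.0),   # ... but identical utility u = 1.0
}

utilities = {a: u(v) for a, v in action_values.items()}
best = max(utilities.values())
ties = sorted(a for a, q in utilities.items() if q == best)

greedy_deterministic = ties[0]        # stable: always "a"
greedy_random = random.choice(ties)   # unstable across repeated calls

print(greedy_deterministic)  # a
```

Avoiding random tie-breaking stabilises the greedy choice, which matches the partial mitigation reported above.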
arXiv Detail & Related papers (2024-02-09T09:28:01Z)
- Multi-Objective GFlowNets [59.16787189214784]
We study the problem of generating diverse candidates in the context of Multi-Objective Optimization.
In many applications of machine learning such as drug discovery and material design, the goal is to generate candidates which simultaneously optimize a set of potentially conflicting objectives.
We propose Multi-Objective GFlowNets (MOGFNs), a novel method for generating diverse optimal solutions, based on GFlowNets.
arXiv Detail & Related papers (2022-10-23T16:15:36Z)
- Multi-Target XGBoostLSS Regression [91.3755431537592]
We present an extension of XGBoostLSS that models multiple targets and their dependencies in a probabilistic regression setting.
Our approach outperforms existing GBMs with respect to runtime and compares well in terms of accuracy.
arXiv Detail & Related papers (2022-10-13T08:26:14Z)
- Multi-Objective Coordination Graphs for the Expected Scalarised Returns with Generative Flow Models [2.7648976108201815]
Key to solving real-world problems is to exploit sparse dependency structures between agents.
In wind farm control a trade-off exists between maximising power and minimising stress on the systems components.
We model such sparse dependencies between agents as a multi-objective coordination graph (MO-CoG).
arXiv Detail & Related papers (2022-07-01T12:10:15Z) - Mono-surrogate vs Multi-surrogate in Multi-objective Bayesian
Optimisation [0.0]
We build a surrogate model for each objective function and show that the scalarising function distribution is not Gaussian.
Results and comparison with existing approaches on standard benchmark and real-world optimisation problems show the potential of the multi-surrogate approach.
arXiv Detail & Related papers (2022-05-02T09:25:04Z) - R-MBO: A Multi-surrogate Approach for Preference Incorporation in
Multi-objective Bayesian Optimisation [0.0]
We present an a-priori multi-surrogate approach to incorporate the desirable objective function values as the preferences of a decision-maker in multi-objective BO.
The results and comparison with the existing mono-surrogate approach on benchmark and real-world optimisation problems show the potential of the proposed approach.
arXiv Detail & Related papers (2022-04-27T19:58:26Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Opponent Learning Awareness and Modelling in Multi-Objective Normal Form
Games [5.0238343960165155]
It is essential for an agent to learn about the behaviour of other agents in the system.
We present the first study of the effects of such opponent modelling on multi-objective multi-agent interactions with non-linear utilities.
arXiv Detail & Related papers (2020-11-14T12:35:32Z) - Randomized Entity-wise Factorization for Multi-Agent Reinforcement
Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: what is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.