Multi-Value Alignment in Normative Multi-Agent System: Evolutionary Optimisation Approach
- URL: http://arxiv.org/abs/2305.07366v1
- Date: Fri, 12 May 2023 10:30:20 GMT
- Title: Multi-Value Alignment in Normative Multi-Agent System: Evolutionary Optimisation Approach
- Authors: Maha Riad, Vinicius Renan de Carvalho and Fatemeh Golpayegani
- Abstract summary: This research proposes a multi-value promotion model that uses multi-objective evolutionary algorithms to produce the optimum parametric set of norms.
Several evolutionary algorithms are used to find sets of optimised norm parameters in two toy tax scenarios, with two and five values respectively.
- Score: 1.160208922584163
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Value-alignment in normative multi-agent systems is used to promote a certain value and to ensure the consistent behavior of agents in autonomous intelligent systems with human values. However, the current literature is limited to the incorporation of effective norms for single-value alignment, with no consideration of agents' heterogeneity or of the requirement to promote and align multiple values simultaneously. This research proposes a multi-value promotion model that uses multi-objective evolutionary algorithms to produce the optimum parametric set of norms, one that is aligned with multiple simultaneous values of heterogeneous agents and the system. To understand various aspects of this complex problem, several evolutionary algorithms are used to find sets of optimised norm parameters in two toy tax scenarios, with two and five values respectively. The results are analysed from different perspectives to show the impact of the selected evolutionary algorithm on the solution, and the importance of understanding the relations between values when prioritising them.
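No code accompanies this summary, but the core idea, evolving norm parameters against several value objectives at once and keeping the Pareto-non-dominated set, can be sketched in a few lines. Everything below (the flat tax-norm parametrisation and the equality and fairness proxies) is an illustrative assumption, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximisation)."""
    return bool(np.all(a >= b) and np.any(a > b))

def evaluate(norm, wealth):
    """Two hypothetical value objectives for a flat tax norm parametrised
    by (tax_rate, redistribution_share); both are maximised."""
    tax_rate, redistribution = norm
    collected = tax_rate * wealth
    after = wealth - collected + redistribution * collected.sum() / len(wealth)
    equality = 1.0 - after.std() / (after.mean() + 1e-9)  # value 1: wealth equality
    fairness = -abs(tax_rate - 0.3)                       # value 2: toy fairness proxy
    return np.array([equality, fairness])

wealth = rng.uniform(10.0, 100.0, size=50)        # heterogeneous agent wealth
pop = list(rng.uniform(0.0, 1.0, size=(20, 2)))   # candidate norm parameter sets

for _ in range(500):
    parent = pop[rng.integers(len(pop))]
    child = np.clip(parent + rng.normal(0.0, 0.05, size=2), 0.0, 1.0)
    f_child = evaluate(child, wealth)
    fs = [evaluate(p, wealth) for p in pop]
    if not any(dominates(f, f_child) for f in fs):
        # keep the child, drop any member it dominates
        pop = [p for p, f in zip(pop, fs) if not dominates(f_child, f)] + [child]

print(len(pop), "non-dominated norm parameter sets kept")
```

In practice a dedicated multi-objective EA (e.g. NSGA-II, as one of several algorithms compared in the paper) would replace this loop; the sketch only shows the shape of the search.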
Related papers
- Preference-Conditioned Gradient Variations for Multi-Objective Quality-Diversity [7.799824794686343]
We introduce a new Multi-Objective Quality-Diversity algorithm with preference-conditioned policy-gradient mutations.
Our method achieves a smoother set of trade-offs, as measured by newly-proposed sparsity-based metrics.
This performance comes at a lower computational storage cost compared to previous methods.
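As a rough illustration of a preference-conditioned policy-gradient mutation (with assumed objective gradients, not the authors' implementation): a preference vector sampled on the simplex scalarises the objective gradients, and the resulting gradient step is used as the mutation.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective_grads(theta):
    """Hypothetical gradients of two conflicting objectives at theta:
    maximise -(theta - 1)^2 and -(theta + 1)^2 coordinate-wise."""
    return np.stack([-2.0 * (theta - 1.0), -2.0 * (theta + 1.0)])

def preference_conditioned_mutation(theta, lr=0.05):
    w = rng.dirichlet(np.ones(2))       # preference over the two objectives
    # gradient step on the w-scalarised objective serves as the mutation
    return theta + lr * (w @ objective_grads(theta))

theta = rng.normal(size=3)
offspring = [preference_conditioned_mutation(theta) for _ in range(5)]
print(np.round(offspring, 3))
```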
arXiv Detail & Related papers (2024-11-19T11:50:03Z)
- Comparative Analysis of Indicators for Multiobjective Diversity Optimization [0.2144088660722956]
We discuss different diversity indicators from the perspective of indicator-based evolutionary algorithms (IBEA) with multiple objectives.
We examine theoretical, computational, and practical properties of these indicators, such as monotonicity in species.
We present new theorems -- including a proof of the NP-hardness of the Riesz s-Energy Subset Selection Problem.
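For context, the Riesz s-energy named above is a standard spread indicator: the sum over all pairs of points of the inverse pairwise distance raised to the power s. A direct implementation (my own, not the paper's) is:

```python
import numpy as np

def riesz_s_energy(points, s=2.0):
    """Riesz s-energy of a finite point set: sum over pairs of 1/||xi - xj||^s.
    Lower energy means the points are more evenly spread."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)   # ignore self-pairs (1/inf^s -> 0)
    return (dists ** -s).sum()

# Evenly spread points have lower energy than clustered ones.
even = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
clustered = np.array([[0.0, 0.0], [0.05, 0.0], [0.0, 0.05]])
assert riesz_s_energy(even) < riesz_s_energy(clustered)
```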
arXiv Detail & Related papers (2024-10-24T16:40:36Z)
- Multi-Objective GFlowNets [59.16787189214784]
We study the problem of generating diverse candidates in the context of Multi-Objective Optimization.
In many applications of machine learning such as drug discovery and material design, the goal is to generate candidates which simultaneously optimize a set of potentially conflicting objectives.
We propose Multi-Objective GFlowNets (MOGFNs), a novel method for generating diverse optimal solutions, based on GFlowNets.
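A full GFlowNet is out of scope for a snippet, but the preference-conditioning trick commonly used with MOGFNs can be shown: a sampled weight vector collapses the reward vector into one scalar reward for the sampler. The reward functions below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def rewards(x):
    """Two made-up conflicting rewards, e.g. potency vs. synthesisability."""
    return np.array([np.exp(-(x - 1.0) ** 2), np.exp(-(x + 1.0) ** 2)])

def scalarised_reward(x, w):
    # the sampler is trained conditioned on w, so a single model can be
    # steered across the whole trade-off front at sampling time
    return float(w @ rewards(x))

w = rng.dirichlet(np.ones(2))   # sampled preference over objectives
print(round(scalarised_reward(0.5, w), 4))
```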
arXiv Detail & Related papers (2022-10-23T16:15:36Z)
- Faster Last-iterate Convergence of Policy Optimization in Zero-Sum Markov Games [63.60117916422867]
This paper focuses on the most basic setting of competitive multi-agent RL, namely two-player zero-sum Markov games.
We propose a single-loop policy optimization method with symmetric updates from both agents, where the policy is updated via the entropy-regularized optimistic multiplicative weights update (OMWU) method.
Our convergence results improve upon the best known complexities, and lead to a better understanding of policy optimization in competitive Markov games.
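For a single-state (matrix) zero-sum game, an entropy-regularised optimistic multiplicative-weights update can be written compactly. The sketch below is a generic illustration of that family of updates, with arbitrarily chosen step size and regularisation, not the paper's exact single-loop method for Markov games:

```python
import numpy as np

def entropy_reg_omwu(A, eta=0.1, tau=0.05, iters=2000):
    """Entropy-regularised optimistic MWU on the zero-sum matrix game
    max_x min_y x^T A y; the previous payoff vector serves as the
    optimistic prediction of the next gradient."""
    m, n = A.shape
    x, y = np.ones(m) / m, np.ones(n) / n
    gx_prev, gy_prev = A @ y, A.T @ x
    for _ in range(iters):
        gx, gy = A @ y, A.T @ x
        # (1 - eta*tau) shrinks log-weights toward uniform (entropy term);
        # (2*g - g_prev) is the optimistic gradient extrapolation
        log_x = (1 - eta * tau) * np.log(x) + eta * (2 * gx - gx_prev)
        log_y = (1 - eta * tau) * np.log(y) - eta * (2 * gy - gy_prev)
        x = np.exp(log_x - log_x.max()); x /= x.sum()
        y = np.exp(log_y - log_y.max()); y /= y.sum()
        gx_prev, gy_prev = gx, gy
    return x, y

# Matching pennies: the regularised equilibrium is near uniform play.
x, y = entropy_reg_omwu(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(x.round(3), y.round(3))
```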
arXiv Detail & Related papers (2022-10-03T16:05:43Z)
- Multi-Objective Quality Diversity Optimization [2.4608515808275455]
We propose an extension of the MAP-Elites algorithm to the multi-objective setting: Multi-Objective MAP-Elites (MOME).
It combines the diversity inherited from the MAP-Elites grid algorithm with the strength of multi-objective optimisation.
We evaluate our method on several tasks, from standard optimization problems to robotics simulations.
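A condensed sketch of the idea (hypothetical fitness and descriptor functions, not the authors' code): a MAP-Elites grid keyed by a behaviour descriptor, where each cell keeps a small Pareto front instead of a single elite.

```python
import numpy as np

rng = np.random.default_rng(3)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximisation)."""
    return bool(np.all(a >= b) and np.any(a > b))

def fitness(x):
    # two hypothetical conflicting objectives
    return np.array([-np.sum((x - 0.25) ** 2), -np.sum((x - 0.75) ** 2)])

def descriptor(x):
    # behaviour descriptor: bucketise the first coordinate into 10 cells
    return min(int(x[0] * 10), 9)

grid = {}  # cell index -> list of (solution, objectives): the cell's Pareto front
for step in range(5000):
    if grid and step > 100:
        fronts = [sf for front in grid.values() for sf in front]
        parent = fronts[rng.integers(len(fronts))][0]
        x = np.clip(parent + rng.normal(0.0, 0.1, 2), 0.0, 1.0)
    else:
        x = rng.uniform(0.0, 1.0, 2)
    f = fitness(x)
    front = grid.setdefault(descriptor(x), [])
    if not any(dominates(g, f) for _, g in front):
        front[:] = [(s, g) for s, g in front if not dominates(f, g)] + [(x, f)]

print({cell: len(front) for cell, front in sorted(grid.items())})
```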
arXiv Detail & Related papers (2022-02-07T10:48:28Z)
- Choosing the Best of Both Worlds: Diverse and Novel Recommendations through Multi-Objective Reinforcement Learning [68.45370492516531]
We introduce Scalarized Multi-Objective Reinforcement Learning (SMORL) for the Recommender Systems (RS) setting.
The SMORL agent augments standard recommendation models with additional RL layers that force it to simultaneously satisfy three principal objectives: accuracy, diversity, and novelty of recommendations.
Our experimental results on two real-world datasets reveal a substantial increase in aggregate diversity, a moderate increase in accuracy, reduced repetitiveness of recommendations, and demonstrate the importance of reinforcing diversity and novelty as complementary objectives.
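The scalarisation at the heart of such an approach fits in a few lines; the three component signals below are illustrative placeholders, not the paper's definitions:

```python
def smorl_style_reward(item, clicked, history, w=(1.0, 1.0, 1.0)):
    """Scalarised multi-objective reward: a weighted sum of accuracy,
    diversity, and novelty signals (all three placeholders here are
    illustrative, not the paper's exact formulations)."""
    recent = history[-5:] + [item]
    accuracy = float(item in clicked)            # did the user engage?
    diversity = len(set(recent)) / len(recent)   # variety within the session
    novelty = float(item not in history)         # unseen by this user before?
    return w[0] * accuracy + w[1] * diversity + w[2] * novelty

print(smorl_style_reward("item42", {"item42"}, ["item1", "item2"]))
```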
arXiv Detail & Related papers (2021-10-28T13:22:45Z)
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem and solve it with a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio of $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing an open question posed by Borodin et al.
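GSEMO itself is short enough to state in full: keep an archive of mutually non-dominated solutions, mutate a random archive member by independent bit-flips, and insert the child if nothing in the archive dominates it. The sketch below runs it on a toy diversification instance, maximising a utility-plus-pairwise-distance value against negative subset size; the paper's exact bi-objective reformulation may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def bi_objective(mask, utility, dist, k, lam=1.0):
    """Toy diversification value: summed utility plus pairwise distances,
    with -|S| as the second objective; oversized subsets are invalidated."""
    idx = np.flatnonzero(mask)
    if len(idx) > k:
        return (-np.inf, -len(idx))
    value = utility[idx].sum() + lam * dist[np.ix_(idx, idx)].sum() / 2.0
    return (value, -len(idx))

n, k = 20, 5
utility = rng.uniform(size=n)
pts = rng.uniform(size=(n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

empty = tuple([0] * n)
archive = {empty: bi_objective(np.array(empty, bool), utility, dist, k)}
for _ in range(5000):
    parent = list(archive)[rng.integers(len(archive))]
    flips = rng.random(n) < 1.0 / n                       # standard bit-flip mutation
    child = tuple(int(b ^ f) for b, f in zip(parent, flips))
    fc = bi_objective(np.array(child, bool), utility, dist, k)
    if not any(dominates(fv, fc) for fv in archive.values()):
        archive = {s: fv for s, fv in archive.items() if not dominates(fc, fv)}
        archive[child] = fc

print("best (value, -size):", max(archive.values()))
```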
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
- Batched Data-Driven Evolutionary Multi-Objective Optimization Based on Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
The framework is general: any off-the-shelf evolutionary multi-objective optimization algorithm can be applied in a plug-in manner.
It features faster convergence and stronger resilience to various Pareto-front (PF) shapes.
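A skeleton of such a batched data-driven loop, with a toy nearest-neighbour surrogate and a naive scalarisation-based batch pick standing in for the paper's manifold-interpolation selection (all components here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def expensive_eval(X):
    """Stand-in for costly true objectives (a toy bi-objective problem)."""
    return np.stack([np.sum(X ** 2, axis=1), np.sum((X - 1.0) ** 2, axis=1)], axis=1)

def fit_surrogate(X, Y):
    """1-nearest-neighbour surrogate; the plug-in framework would let any
    model (e.g. a Gaussian process) take this slot."""
    def predict(Q):
        d = np.linalg.norm(Q[:, None] - X[None, :], axis=-1)
        return Y[d.argmin(axis=1)]
    return predict

X = rng.uniform(0.0, 1.0, (10, 3))
Y = expensive_eval(X)
for _ in range(5):                                 # batched data-driven rounds
    surrogate = fit_surrogate(X, Y)
    cand = rng.uniform(0.0, 1.0, (200, 3))         # stand-in for an inner EMO run
    pred = surrogate(cand)
    # naive batch selection: best predicted point per random scalarisation
    picks = [cand[(pred @ rng.dirichlet(np.ones(2))).argmin()] for _ in range(4)]
    batch = np.array(picks)
    X, Y = np.vstack([X, batch]), np.vstack([Y, expensive_eval(batch)])

print(X.shape, Y.shape)
```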
arXiv Detail & Related papers (2021-09-12T23:54:26Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Permutation Invariant Policy Optimization for Mean-Field Multi-Agent Reinforcement Learning: A Principled Approach [128.62787284435007]
We propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.
We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence.
In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors.
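The permutation-invariant inductive bias can be illustrated with a DeepSets-style network: shared per-agent embeddings are mean-pooled before the final map, so reordering the agents cannot change the output. A sketch, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(6)

def permutation_invariant_critic(agent_obs, W_embed, W_out):
    """DeepSets-style value network: embed each agent's observation with
    shared weights, mean-pool over agents, then map to a scalar value."""
    h = np.tanh(agent_obs @ W_embed)   # shared per-agent embedding
    pooled = h.mean(axis=0)            # permutation-invariant aggregation
    return float(pooled @ W_out)

obs = rng.normal(size=(5, 4))                  # 5 agents, 4-dim observations
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=8)
v1 = permutation_invariant_critic(obs, W1, W2)
v2 = permutation_invariant_critic(obs[::-1], W1, W2)   # permute the agents
assert abs(v1 - v2) < 1e-9                     # value unchanged by reordering
```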
arXiv Detail & Related papers (2021-05-18T04:35:41Z)
- Niching Diversity Estimation for Multi-modal Multi-objective Optimization [9.584279193016522]
Niching is an important and widely used technique in evolutionary multi-objective optimization.
In multi-modal multi-objective optimization problems (MMOPs), a solution in the objective space may have multiple inverse images in the decision space; these are termed equivalent solutions.
A general niching mechanism is proposed to make standard diversity estimators more efficient when handling MMOPs.
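One way to picture such a niching mechanism (illustrative, not the paper's exact operator): measure each solution's decision-space diversity only against its nearest neighbours in objective space, so equivalent solutions stop being masked by distant parts of the front.

```python
import numpy as np

def niched_diversity(dec, obj, k=5):
    """Sketch of a niching-style diversity estimate for MMOPs: a solution's
    decision-space diversity is measured only against the k solutions
    nearest to it in OBJECTIVE space (its niche)."""
    d_obj = np.linalg.norm(obj[:, None] - obj[None, :], axis=-1)
    scores = np.empty(len(dec))
    for i in range(len(dec)):
        niche = np.argsort(d_obj[i])[1:k + 1]   # objective-space neighbours, self excluded
        scores[i] = np.linalg.norm(dec[niche] - dec[i], axis=1).min()
    return scores   # larger = more decision-space diversity within the niche

rng = np.random.default_rng(7)
dec = rng.uniform(size=(30, 2))
obj = np.stack([dec[:, 0], 1 - dec[:, 0]], axis=1)   # toy MMOP: obj ignores dec[:, 1]
print(niched_diversity(dec, obj).round(3))
```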
arXiv Detail & Related papers (2021-01-31T05:23:31Z)