Altruism and Fair Objective in Mixed-Motive Markov games
- URL: http://arxiv.org/abs/2602.08389v1
- Date: Mon, 09 Feb 2026 08:40:52 GMT
- Title: Altruism and Fair Objective in Mixed-Motive Markov games
- Authors: Yao-hua Franck Xu, Tayeb Lemlouma, Arnaud Braud, Jean-Marie Bonnin
- Abstract summary: In game theory, social dilemmas capture the dichotomy between individual interest and collective outcome. This paper proposes a novel framework to foster fairer cooperation by replacing the standard utilitarian objective with Proportional Fairness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cooperation is fundamental for society's viability, as it enables the emergence of structure within heterogeneous groups that seek collective well-being. However, individuals are inclined to defect in order to benefit from the group's cooperation without contributing the associated costs, thus leading to unfair situations. In game theory, social dilemmas entail this dichotomy between individual interest and collective outcome. The most dominant approach to multi-agent cooperation is utilitarian welfare, which can produce efficient yet highly inequitable outcomes. This paper proposes a novel framework to foster fairer cooperation by replacing the standard utilitarian objective with Proportional Fairness. We introduce a fair altruistic utility for each agent, defined on the individual log-payoff space, and derive the analytical conditions required to ensure cooperation in classic social dilemmas. We then extend this framework to sequential settings by defining a Fair Markov Game and deriving novel fair Actor-Critic algorithms to learn fair policies. Finally, we evaluate our method in various social dilemma environments.
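The abstract's contrast between the utilitarian objective and Proportional Fairness can be illustrated with a minimal sketch. Proportional Fairness is conventionally expressed as maximizing the sum of log payoffs (equivalently, the Nash welfare); the function names below are illustrative, not the paper's API, and the payoff vectors are made up for demonstration.

```python
import math

def utilitarian_welfare(payoffs):
    # Standard utilitarian objective: sum of individual payoffs.
    # Indifferent to how the total is distributed across agents.
    return sum(payoffs)

def proportional_fairness(payoffs):
    # Proportional-fairness objective: sum of log payoffs.
    # Defined only for strictly positive payoffs; the concavity of
    # log penalizes inequitable allocations.
    return sum(math.log(p) for p in payoffs)

# Two allocations with the same total payoff of 10:
unequal = [9.0, 1.0]  # efficient but highly inequitable
equal = [5.0, 5.0]

# The utilitarian objective cannot tell them apart...
print(utilitarian_welfare(unequal), utilitarian_welfare(equal))
# ...whereas proportional fairness strictly prefers the equal split.
print(proportional_fairness(unequal) < proportional_fairness(equal))
```

This is the sense in which a utilitarian optimum can be "efficient yet highly inequitable": any allocation summing to the same total scores identically, while the log-payoff objective breaks the tie toward equity.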
Related papers
- Toward a Sustainable Federated Learning Ecosystem: A Practical Least Core Mechanism for Payoff Allocation [71.86087908416255]
We introduce a payoff allocation framework based on the least core (LC) concept. Unlike traditional methods, the LC prioritizes the cohesion of the federation by minimizing the maximum dissatisfaction. Case studies in federated intrusion detection demonstrate that our mechanism correctly identifies pivotal contributors and strategic alliances.
arXiv Detail & Related papers (2026-02-03T11:10:50Z) - Social welfare optimisation in well-mixed and structured populations [6.45507185761727]
We show that achieving maximal social welfare is not guaranteed at the minimal incentive cost required to drive agents to a desired cooperative state. Our results reveal a significant gap in the per-individual incentive cost between optimising for pure cost efficiency or cooperation frequency and optimising for maximal social welfare. Overall, our findings indicate that incentive design, policy, and benchmarking in multi-agent systems and human societies should prioritise welfare-centric objectives over proxy targets of cost or cooperation frequency.
arXiv Detail & Related papers (2025-12-08T11:27:43Z) - A Mechanism for Mutual Fairness in Cooperative Games with Replicable Resources -- Extended Version [2.709511652792003]
Latest developments in AI focus on agentic systems where artificial and human agents cooperate to realize global goals. A major challenge in designing such systems is to guarantee safety and alignment with human values. Cooperative game theory offers useful abstractions of cooperating agents via value functions.
arXiv Detail & Related papers (2025-08-19T15:53:34Z) - Achieving Collective Welfare in Multi-Agent Reinforcement Learning via Suggestion Sharing [12.167248367980449]
Conflict between self-interest and collective well-being often obstructs efforts to achieve shared welfare. We propose a novel multi-agent reinforcement learning (MARL) method to address this issue. Unlike traditional cooperative MARL solutions that involve sharing rewards, values, and policies, we propose a novel MARL approach where agents exchange action suggestions.
arXiv Detail & Related papers (2024-12-16T19:44:44Z) - Learning to Balance Altruism and Self-interest Based on Empathy in Mixed-Motive Games [47.8980880888222]
Multi-agent scenarios often involve mixed motives, demanding altruistic agents capable of self-protection against potential exploitation. We propose LASE (Learning to balance Altruism and Self-interest based on Empathy). LASE allocates a portion of its rewards to co-players as gifts, with this allocation adapting dynamically based on the social relationship.
arXiv Detail & Related papers (2024-10-10T12:30:56Z) - Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z) - Learning Roles with Emergent Social Value Orientations [49.16026283952117]
This paper introduces the typical "division of labor or roles" mechanism in human society.
We provide a promising solution for intertemporal social dilemmas (ISD) with social value orientations (SVO).
A novel learning framework, called Learning Roles with Emergent SVOs (RESVO), is proposed to transform the learning of roles into the social value orientation emergence.
arXiv Detail & Related papers (2023-01-31T17:54:09Z) - Normative Disagreement as a Challenge for Cooperative AI [56.34005280792013]
We argue that typical cooperation-inducing learning algorithms fail to cooperate in bargaining problems.
We develop a class of norm-adaptive policies and show in experiments that these significantly increase cooperation.
arXiv Detail & Related papers (2021-11-27T11:37:42Z) - Balancing Rational and Other-Regarding Preferences in Cooperative-Competitive Environments [4.705291741591329]
Mixed environments are notorious for the conflicts of selfish and social interests.
We propose BAROCCO to balance individual and social incentives.
Our meta-algorithm is compatible with both Q-learning and Actor-Critic frameworks.
arXiv Detail & Related papers (2021-02-24T14:35:32Z) - Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions [80.49176924360499]
We establish a framework for directing a society of simple, specialized, self-interested agents to solve sequential decision problems.
We derive a class of decentralized reinforcement learning algorithms.
We demonstrate the potential advantages of a society's inherent modular structure for more efficient transfer learning.
arXiv Detail & Related papers (2020-07-05T16:41:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.