A Mechanism for Mutual Fairness in Cooperative Games with Replicable Resources -- Extended Version
- URL: http://arxiv.org/abs/2508.13960v1
- Date: Tue, 19 Aug 2025 15:53:34 GMT
- Title: A Mechanism for Mutual Fairness in Cooperative Games with Replicable Resources -- Extended Version
- Authors: Björn Filter, Ralf Möller, Özgür Lütfü Özçep
- Abstract summary: The latest developments in AI focus on agentic systems where artificial and human agents cooperate to realize global goals. A major challenge in designing such systems is to guarantee safety and alignment with human values. Cooperative game theory offers useful abstractions of cooperating agents via value functions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The latest developments in AI focus on agentic systems where artificial and human agents cooperate to realize global goals. An example is collaborative learning, which aims to train a global model based on data from individual agents. A major challenge in designing such systems is to guarantee safety and alignment with human values, particularly a fair distribution of rewards upon achieving the global goal. Cooperative game theory offers useful abstractions of cooperating agents via value functions, which assign value to each coalition, and via reward functions. With these, the idea of fair allocation can be formalized by specifying fairness axioms and designing concrete mechanisms. Classical cooperative game theory, exemplified by the Shapley value, does not fully capture scenarios like collaborative learning, as it assumes nonreplicable resources, whereas data and models can be replicated. Infinite replicability requires a generalized notion of fairness, formalized through new axioms and mechanisms. These must address imbalances in reciprocal benefits among participants, which can lead to strategic exploitation and unfair allocations. The main contribution of this paper is a mechanism and a proof that it fulfills the property of mutual fairness, formalized by the Balanced Reciprocity Axiom. It ensures that, for every pair of players, each benefits equally from the participation of the other.
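The abstract contrasts the proposed mechanism with the classical Shapley value, which assigns each player their average marginal contribution over all join orders. As a point of reference for that baseline, here is a minimal sketch of an exact Shapley computation; the function names and the toy value function are illustrative, not taken from the paper, and the enumeration is exponential, so it is only suitable for small games.

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal
    contribution v(S ∪ {p}) - v(S) over all join orders.
    Exponential in the number of players; illustration only."""
    totals = {p: 0.0 for p in players}
    n_orders = 0
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += v(with_p) - v(coalition)
            coalition = with_p
        n_orders += 1
    return {p: totals[p] / n_orders for p in players}

# Toy superadditive game: a coalition's value is the square of
# its size, so all players are symmetric and share v(N) equally.
v = lambda S: len(S) ** 2
print(shapley_values(["a", "b", "c"], v))  # each player gets 3.0
```

Note that this classical mechanism implicitly assumes a nonreplicable resource: a player's data counts once, in one coalition at a time. The paper's point is that with infinitely replicable resources such as data and models this assumption breaks down, which motivates the Balanced Reciprocity Axiom instead.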
Related papers
- Altruism and Fair Objective in Mixed-Motive Markov games [0.0]
In game theory, social dilemmas entail a dichotomy between individual interest and collective outcome. This paper proposes a novel framework to foster fairer cooperation by replacing the standard utilitarian objective with Proportional Fairness.
arXiv Detail & Related papers (2026-02-09T08:40:52Z) - Toward a Sustainable Federated Learning Ecosystem: A Practical Least Core Mechanism for Payoff Allocation [71.86087908416255]
We introduce a payoff allocation framework based on the least core (LC) concept. Unlike traditional methods, the LC prioritizes the cohesion of the federation by minimizing the maximum dissatisfaction. Case studies in federated intrusion detection demonstrate that our mechanism correctly identifies pivotal contributors and strategic alliances.
arXiv Detail & Related papers (2026-02-03T11:10:50Z) - Embedded Universal Predictive Intelligence: a coherent framework for multi-agent learning [57.23345786304694]
We introduce a framework for prospective learning and embedded agency centered on self-prediction. We show that in multi-agent settings, self-prediction enables agents to reason about others running similar algorithms. We extend the theory of AIXI, and study universally intelligent embedded agents which start from a Solomonoff prior.
arXiv Detail & Related papers (2025-11-27T08:46:48Z) - Social World Model-Augmented Mechanism Design Policy Learning [58.739456918502704]
We introduce SWM-AP (Social World Model-Augmented Mechanism Design Policy Learning), which learns a social world model hierarchically to enhance mechanism design. We show that SWM-AP outperforms established model-based and model-free RL baselines in cumulative rewards and sample efficiency.
arXiv Detail & Related papers (2025-10-22T06:01:21Z) - Learning to Balance Altruism and Self-interest Based on Empathy in Mixed-Motive Games [47.8980880888222]
Multi-agent scenarios often involve mixed motives, demanding altruistic agents capable of self-protection against potential exploitation. We propose LASE (Learning to balance Altruism and Self-interest based on Empathy). LASE allocates a portion of its rewards to co-players as gifts, with this allocation adapting dynamically based on the social relationship.
arXiv Detail & Related papers (2024-10-10T12:30:56Z) - ALYMPICS: LLM Agents Meet Game Theory -- Exploring Strategic Decision-Making with AI Agents [77.34720446306419]
Alympics is a systematic simulation framework utilizing Large Language Model (LLM) agents for game theory research.
Alympics creates a versatile platform for studying complex game theory problems.
arXiv Detail & Related papers (2023-11-06T16:03:46Z) - Cooperation Dynamics in Multi-Agent Systems: Exploring Game-Theoretic Scenarios with Mean-Field Equilibria [0.0]
This paper investigates strategies to invoke cooperation in game-theoretic scenarios, namely the Iterated Prisoner's Dilemma.
Existing cooperative strategies are analyzed for their effectiveness in promoting group-oriented behavior in repeated games.
The study extends to scenarios with exponentially growing agent populations.
arXiv Detail & Related papers (2023-09-28T08:57:01Z) - DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Distributional Reinforcement Learning for Multi-Dimensional Reward Functions [91.88969237680669]
We introduce Multi-Dimensional Distributional DQN (MD3QN) to model the joint return distribution from multiple reward sources.
As a by-product of joint distribution modeling, MD3QN can capture the randomness in returns for each source of reward.
In experiments, our method accurately models the joint return distribution in environments with richly correlated reward functions.
arXiv Detail & Related papers (2021-10-26T11:24:23Z) - Cooperation and Reputation Dynamics with Reinforcement Learning [6.219565750197311]
We show how reputations can be used as a way to establish trust and cooperation.
We propose two mechanisms to alleviate convergence to undesirable equilibria.
We show how our results relate to the literature in Evolutionary Game Theory.
arXiv Detail & Related papers (2021-02-15T12:48:56Z) - Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions [80.49176924360499]
We establish a framework for directing a society of simple, specialized, self-interested agents to solve sequential decision problems.
We derive a class of decentralized reinforcement learning algorithms.
We demonstrate the potential advantages of a society's inherent modular structure for more efficient transfer learning.
arXiv Detail & Related papers (2020-07-05T16:41:09Z) - Calibration of Shared Equilibria in General Sum Partially Observable Markov Games [15.572157454411533]
We consider a general sum partially observable Markov game where agents of different types share a single policy network.
This paper aims at (i) formally understanding the equilibria reached by such agents, and (ii) matching emergent phenomena of such equilibria to real-world targets.
arXiv Detail & Related papers (2020-06-23T15:14:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.