One for One, or All for All: Equilibria and Optimality of Collaboration
in Federated Learning
- URL: http://arxiv.org/abs/2103.03228v1
- Date: Thu, 4 Mar 2021 18:53:17 GMT
- Title: One for One, or All for All: Equilibria and Optimality of Collaboration
in Federated Learning
- Authors: Avrim Blum, Nika Haghtalab, Richard Lanas Phillips, Han Shao
- Abstract summary: Inspired by game theoretic notions, this paper introduces a framework for incentive-aware learning and data sharing in federated learning.
Our stable and envy-free equilibria capture notions of collaboration in the presence of agents interested in meeting their learning objectives.
- Score: 24.196114621742705
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In recent years, federated learning has been embraced as an approach for
bringing about collaboration across large populations of learning agents.
However, little is known about how collaboration protocols should take agents'
incentives into account when allocating individual resources for communal
learning in order to maintain such collaborations. Inspired by game theoretic
notions, this paper introduces a framework for incentive-aware learning and
data sharing in federated learning. Our stable and envy-free equilibria capture
notions of collaboration in the presence of agents interested in meeting their
learning objectives while keeping their own sample collection burden low. For
example, in an envy-free equilibrium, no agent would wish to swap their
sampling burden with any other agent and in a stable equilibrium, no agent
would wish to unilaterally reduce their sampling burden.
In addition to formalizing this framework, our contributions include
characterizing the structural properties of such equilibria, proving when they
exist, and showing how they can be computed. Furthermore, we compare the sample
complexity of incentive-aware collaboration with that of optimal collaboration
when one ignores agents' incentives.
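To make the two equilibrium notions concrete, below is a minimal sketch (not the paper's construction or algorithm) that checks stability and envy-freeness of a candidate allocation of sampling burdens. It assumes a deliberately simplified setting: all agents sample from one shared distribution, agent i's expected error depends only on the pooled sample count through a hypothetical err(total) = C / sqrt(total), and agent i's learning objective is just err <= eps[i]. The constant C, the error model, and the function names are illustrative assumptions, not taken from the paper.

```python
# Minimal illustrative sketch of the two equilibrium notions from the abstract.
# Assumptions (hypothetical, for illustration only): a shared distribution,
# pooled samples, err(total) = C / sqrt(total), and per-agent targets eps[i].
import math

C = 1.0  # hypothetical error constant

def err(total_samples: float) -> float:
    """Expected error as a decreasing function of the pooled sample count."""
    return math.inf if total_samples <= 0 else C / math.sqrt(total_samples)

def feasible(theta, eps):
    """Every agent meets its learning objective under the pooled allocation theta."""
    total = sum(theta)
    return all(err(total) <= e for e in eps)

def is_stable(theta, eps, step=1e-3):
    """Approximate check: no agent can unilaterally reduce its own burden and still meet its objective."""
    if not feasible(theta, eps):
        return False
    total = sum(theta)
    for i, t in enumerate(theta):
        if t > 0 and err(total - min(step, t)) <= eps[i]:
            return False  # agent i could shave off samples and free-ride
    return True

def is_envy_free(theta, eps):
    """No agent would rather hold another agent's strictly smaller burden."""
    total = sum(theta)  # swapping two burdens leaves the total unchanged here
    for i, ti in enumerate(theta):
        for j, tj in enumerate(theta):
            if tj < ti and err(total) <= eps[i]:
                return False  # agent i envies agent j's lighter burden
    return True

# Two agents, both targeting error 0.1, need 100 pooled samples in total.
# The equal split is stable and envy-free; the unequal split is still
# stable but not envy-free, since the heavier contributor would swap.
print(is_stable([50, 50], [0.1, 0.1]), is_envy_free([50, 50], [0.1, 0.1]))  # True True
print(is_stable([70, 30], [0.1, 0.1]), is_envy_free([70, 30], [0.1, 0.1]))  # True False
```

In this symmetric toy setting, swapping burdens never changes the pooled total, so only equal-burden allocations are envy-free while many unequal allocations remain stable; the paper analyzes these notions in far more general, heterogeneous settings.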
Related papers
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
- Learning to Balance Altruism and Self-interest Based on Empathy in Mixed-Motive Games [47.8980880888222]
Multi-agent scenarios often involve mixed motives, demanding altruistic agents capable of self-protection against potential exploitation.
We propose LASE (Learning to balance Altruism and Self-interest based on Empathy).
LASE allocates a portion of its rewards to co-players as gifts, with this allocation adapting dynamically based on the social relationship.
arXiv Detail & Related papers (2024-10-10T12:30:56Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Cooperation Dynamics in Multi-Agent Systems: Exploring Game-Theoretic Scenarios with Mean-Field Equilibria [0.0]
This paper investigates strategies to invoke cooperation in game-theoretic scenarios, namely the Iterated Prisoner's Dilemma.
Existing cooperative strategies are analyzed for their effectiveness in promoting group-oriented behavior in repeated games.
The study extends to scenarios with exponentially growing agent populations.
arXiv Detail & Related papers (2023-09-28T08:57:01Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Collaborative Learning via Prediction Consensus [38.89001892487472]
We consider a collaborative learning setting where the goal of each agent is to improve their own model by leveraging the expertise of collaborators.
We propose a distillation-based method leveraging shared unlabeled auxiliary data, which is pseudo-labeled by the collective.
We demonstrate empirically that our collaboration scheme is able to significantly boost the performance of individual models.
arXiv Detail & Related papers (2023-05-29T14:12:03Z)
- Incentivizing Honesty among Competitors in Collaborative Learning and Optimization [5.4619385369457225]
Collaborative learning techniques have the potential to enable machine learning models that are superior to models trained on a single entity's data.
In many cases, potential participants in such collaborative schemes are competitors on a downstream task.
arXiv Detail & Related papers (2023-05-25T17:28:41Z)
- Adaptive Value Decomposition with Greedy Marginal Contribution Computation for Cooperative Multi-Agent Reinforcement Learning [48.41925886860991]
Real-world cooperation often requires intensive coordination among agents simultaneously.
Traditional methods that learn the value function as a monotonic mixing of per-agent utilities cannot solve the tasks with non-monotonic returns.
We propose a novel explicit credit assignment method to address the non-monotonic problem.
arXiv Detail & Related papers (2023-02-14T07:23:59Z)
- Game-Theoretical Perspectives on Active Equilibria: A Preferred Solution Concept over Nash Equilibria [61.093297204685264]
An effective approach in multiagent reinforcement learning is to consider the learning process of agents and influence their future policies.
The active equilibrium, the solution concept introduced there, is general enough that standard solution concepts, such as the Nash equilibrium, arise as special cases.
We analyze active equilibria from a game-theoretic perspective by closely studying examples where Nash equilibria are known.
arXiv Detail & Related papers (2022-10-28T14:45:39Z)
- DM$^2$: Distributed Multi-Agent Reinforcement Learning for Distribution Matching [43.58408474941208]
This paper studies the problem of distributed multi-agent learning without resorting to explicit coordination schemes.
Each individual agent matches a target distribution of concurrently sampled trajectories from a joint expert policy.
Experimental validation on the StarCraft domain shows that combining the reward for distribution matching with the environment reward allows agents to outperform a fully distributed baseline.
arXiv Detail & Related papers (2022-06-01T04:57:50Z)
- Cooperation and Reputation Dynamics with Reinforcement Learning [6.219565750197311]
We show how reputations can be used as a way to establish trust and cooperation.
We propose two mechanisms to alleviate convergence to undesirable equilibria.
We show how our results relate to the literature in Evolutionary Game Theory.
arXiv Detail & Related papers (2021-02-15T12:48:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.