Incentivizing Honesty among Competitors in Collaborative Learning and Optimization
- URL: http://arxiv.org/abs/2305.16272v3
- Date: Mon, 30 Oct 2023 09:22:23 GMT
- Title: Incentivizing Honesty among Competitors in Collaborative Learning and Optimization
- Authors: Florian E. Dorner, Nikola Konstantinov, Georgi Pashaliev, Martin Vechev
- Abstract summary: Collaborative learning techniques have the potential to enable machine learning models that are superior to models trained on a single entity's data.
In many cases, potential participants in such collaborative schemes are competitors on a downstream task.
- Score: 5.4619385369457225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative learning techniques have the potential to enable training
machine learning models that are superior to models trained on a single
entity's data. However, in many cases, potential participants in such
collaborative schemes are competitors on a downstream task, such as firms that
each aim to attract customers by providing the best recommendations. This can
incentivize dishonest updates that damage other participants' models,
potentially undermining the benefits of collaboration. In this work, we
formulate a game that models such interactions and study two learning tasks
within this framework: single-round mean estimation and multi-round SGD on
strongly-convex objectives. For a natural class of player actions, we show that
rational clients are incentivized to strongly manipulate their updates,
preventing learning. We then propose mechanisms that incentivize honest
communication and ensure learning quality comparable to full cooperation.
Lastly, we empirically demonstrate the effectiveness of our incentive scheme on
a standard non-convex federated learning benchmark. Our work shows that
explicitly modeling the incentives and actions of dishonest clients, rather
than assuming them malicious, can enable strong robustness guarantees for
collaborative learning.
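The single-round mean-estimation game described in the abstract can be illustrated with a minimal sketch. This is a hypothetical setup for intuition only, not the paper's formal model or proposed mechanism: clients report estimates, the server averages them, and a single strategic client can shift everyone else's estimate by exaggerating its report.

```python
# Hypothetical illustration: single-round collaborative mean estimation.
# Each client holds a noisy local estimate of an unknown true mean; the
# server naively averages the reports. A dishonest client can damage its
# competitors' estimates by reporting a heavily shifted value.
import random

random.seed(0)
TRUE_MEAN = 5.0
N_CLIENTS = 10

# Each client observes a noisy local estimate of the true mean.
local_estimates = [TRUE_MEAN + random.gauss(0, 0.5) for _ in range(N_CLIENTS)]

def server_average(reports):
    """Naive aggregation: average all client reports."""
    return sum(reports) / len(reports)

# Honest protocol: every client reports its local estimate.
honest = server_average(local_estimates)

# Dishonest protocol: client 0 exaggerates its report by a large offset,
# while privately keeping its accurate local estimate for itself.
reports = list(local_estimates)
reports[0] += 50.0
manipulated = server_average(reports)

print(abs(honest - TRUE_MEAN))       # small error under honest reporting
print(abs(manipulated - TRUE_MEAN))  # error inflated by one manipulator
```

Under naive averaging, a single manipulator shifts the collective estimate by its offset divided by the number of clients, which is exactly the kind of incentive problem the paper's mechanisms are designed to remove.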
Related papers
- Collaborative Active Learning in Conditional Trust Environment [1.3846014191157405]
We investigate collaborative active learning, a paradigm in which multiple collaborators explore a new domain by leveraging their combined machine learning capabilities without disclosing their existing data and models.
This collaboration offers several advantages: (a) it addresses privacy and security concerns by eliminating the need for direct model and data disclosure; (b) it enables the use of different data sources and insights without direct data exchange; and (c) it promotes cost-effectiveness and resource efficiency through shared labeling costs.
arXiv Detail & Related papers (2024-03-27T10:40:27Z)
- On the Effect of Defections in Federated Learning and How to Prevent Them [20.305263691102727]
Federated learning is a machine learning protocol that enables a large population of agents to collaborate over multiple rounds to produce a single consensus model.
This work demonstrates the impact of such defections on the final model's robustness and ability to generalize.
We introduce a novel optimization algorithm with theoretical guarantees to prevent defections while ensuring convergence to demonstrate an effective solution for all participating agents.
arXiv Detail & Related papers (2023-11-28T03:34:22Z)
- Incentivized Communication for Federated Bandits [67.4682056391551]
We introduce an incentivized communication problem for federated bandits, where the server must motivate clients to share data by providing incentives.
We propose the first incentivized communication protocol, namely, Inc-FedUCB, that achieves near-optimal regret with provable communication and incentive cost guarantees.
arXiv Detail & Related papers (2023-09-21T00:59:20Z)
- Collaborative Learning via Prediction Consensus [38.89001892487472]
We consider a collaborative learning setting where the goal of each agent is to improve their own model by leveraging the expertise of collaborators.
We propose a distillation-based method leveraging shared unlabeled auxiliary data, which is pseudo-labeled by the collective.
We demonstrate empirically that our collaboration scheme is able to significantly boost the performance of individual models.
arXiv Detail & Related papers (2023-05-29T14:12:03Z)
- Incentivizing Federated Learning [2.420324724613074]
This paper presents an incentive mechanism that encourages clients to contribute as much data as they can obtain.
Unlike previous incentive mechanisms, our approach does not monetize data.
We theoretically prove that clients will use as much data as they can possibly possess to participate in federated learning under certain conditions.
arXiv Detail & Related papers (2022-05-22T23:02:43Z)
- On the benefits of knowledge distillation for adversarial robustness [53.41196727255314]
We show that knowledge distillation can be used directly to boost the performance of state-of-the-art models in adversarial robustness.
We present Adversarial Knowledge Distillation (AKD), a new framework to improve a model's robust performance.
arXiv Detail & Related papers (2022-03-14T15:02:13Z)
- Mutual Adversarial Training: Learning together is better than going alone [82.78852509965547]
We study how interactions among models affect robustness via knowledge distillation.
We propose mutual adversarial training (MAT) in which multiple models are trained together.
MAT can effectively improve model robustness and outperform state-of-the-art methods under white-box attacks.
arXiv Detail & Related papers (2021-12-09T15:59:42Z)
- FedKD: Communication Efficient Federated Learning via Knowledge Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
arXiv Detail & Related papers (2021-08-30T15:39:54Z)
- FERN: Fair Team Formation for Mutually Beneficial Collaborative Learning [9.484474204788349]
This work introduces FERN, a fair team formation approach that promotes mutually beneficial peer learning.
We formulate this as a discrete optimization problem, show it to be NP-hard, and propose a hill-climbing algorithm.
arXiv Detail & Related papers (2020-11-23T18:38:01Z)
- Peer Collaborative Learning for Online Knowledge Distillation [69.29602103582782]
Peer Collaborative Learning method integrates online ensembling and network collaboration into a unified framework.
Experiments on CIFAR-10, CIFAR-100 and ImageNet show that the proposed method significantly improves the generalisation of various backbone networks.
arXiv Detail & Related papers (2020-06-07T13:21:52Z)
- Dual Policy Distillation [58.43610940026261]
Policy distillation, which transfers a teacher policy to a student policy, has achieved great success in challenging tasks of deep reinforcement learning.
In this work, we introduce dual policy distillation(DPD), a student-student framework in which two learners operate on the same environment to explore different perspectives of the environment.
The key challenge in developing this dual learning framework is to identify the beneficial knowledge from the peer learner for contemporary learning-based reinforcement learning algorithms.
arXiv Detail & Related papers (2020-06-07T06:49:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.