A Framework for Incentivized Collaborative Learning
- URL: http://arxiv.org/abs/2305.17052v1
- Date: Fri, 26 May 2023 16:00:59 GMT
- Title: A Framework for Incentivized Collaborative Learning
- Authors: Xinran Wang, Qi Le, Ahmad Faraz Khan, Jie Ding, Ali Anwar
- Abstract summary: We propose ICL, a general framework for incentivized collaborative learning.
We show the broad applicability of ICL to specific cases in federated learning, assisted learning, and multi-armed bandit.
- Score: 15.44652093599549
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborations among various entities, such as companies, research labs, AI agents, and edge devices, have become increasingly crucial for achieving machine learning tasks that cannot be accomplished by a single entity alone. This is often due to factors such as security constraints, privacy concerns, and limitations in computation resources. As a result, collaborative learning (CL) research has been gaining momentum. However, a significant challenge in practical applications of CL is how to effectively incentivize multiple entities to collaborate before any collaboration occurs. In this study, we propose ICL, a general framework for incentivized collaborative learning, and provide insights into the critical issue of when and why incentives can improve collaboration performance. Furthermore, we show the broad applicability of ICL to specific cases in federated learning, assisted learning, and multi-armed bandits, with both theory and experimental results.
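To make the incentive question concrete, here is a minimal, hypothetical sketch (not the paper's ICL mechanism; all names and numbers below are illustrative) of why incentives can improve collaboration: each entity joins only if its marginal gain plus the offered incentive covers its participation cost, so a modest payment can tip the coalition into a strictly more valuable configuration.

```python
# Toy model of incentivized participation (illustrative only, not ICL itself).
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    cost: float          # local cost of joining (compute, privacy risk, etc.)
    contribution: float  # marginal value the entity adds to the joint task

def coalition_value(members, synergy=1.2):
    """Toy super-additive value: collaborating amplifies summed contributions."""
    total = sum(e.contribution for e in members)
    return synergy * total if len(members) > 1 else total

def joins(entity, members, incentive):
    """An entity participates iff its payoff covers its cost."""
    marginal_gain = coalition_value(members + [entity]) - coalition_value(members)
    return marginal_gain + incentive >= entity.cost

entities = [Entity("lab", 2.0, 3.0), Entity("edge", 1.5, 0.8), Entity("firm", 4.0, 2.5)]
for incentive in (0.0, 1.0):
    members = []
    for e in entities:
        if joins(e, members, incentive):
            members.append(e)
    print(f"incentive={incentive}: coalition={[e.name for e in members]}, "
          f"value={coalition_value(members):.2f}")
```

With no incentive the third entity stays out; a payment of 1.0 brings it in and raises the total coalition value, which is the kind of regime the paper's "when and why" analysis formalizes.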
Related papers
- TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition [61.91764883512776]
We introduce TeamLoRA, an innovative PEFT method consisting of collaboration and competition modules for experts.
By doing so, TeamLoRA connects the experts as a "Team" with internal collaboration and competition, enabling a faster and more accurate PEFT paradigm for multi-task learning.
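As a rough illustration only (the module below is hypothetical and simpler than the actual TeamLoRA design), multiple LoRA experts can be combined so that a shared sum provides the collaboration and a softmax gate makes experts compete for routing weight:

```python
# Hypothetical multi-expert LoRA layer (illustrative, not TeamLoRA's architecture).
import torch
import torch.nn as nn

class MultiExpertLoRA(nn.Module):
    def __init__(self, d_in, d_out, rank=8, n_experts=4):
        super().__init__()
        self.A = nn.Parameter(torch.randn(n_experts, d_in, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, rank, d_out))  # zero init, as in LoRA
        self.gate = nn.Linear(d_in, n_experts)

    def forward(self, x, base_out):
        w = torch.softmax(self.gate(x), dim=-1)  # (N, E): experts compete for weight
        # Collaboration: the experts' low-rank updates are summed, weighted by the gate.
        delta = torch.einsum("ne,eir,ero,ni->no", w, self.A, self.B, x)
        return base_out + delta
```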
arXiv Detail & Related papers (2024-08-19T09:58:53Z)
- Collaborative Active Learning in Conditional Trust Environment [1.3846014191157405]
We investigate collaborative active learning, a paradigm in which multiple collaborators explore a new domain by leveraging their combined machine learning capabilities without disclosing their existing data and models.
This collaboration offers several advantages: (a) it addresses privacy and security concerns by eliminating the need for direct model and data disclosure; (b) it enables the use of different data sources and insights without direct data exchange; and (c) it promotes cost-effectiveness and resource efficiency through shared labeling costs.
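One way such a collaboration could be realized (an assumed sketch, not the paper's protocol) is query-by-committee over shared predictions only: collaborators exchange class probabilities, never data or models, and jointly pay to label the most contested samples.

```python
# Disagreement-based query selection from shared predictions (illustrative sketch).
import numpy as np

def select_queries(probs_by_collaborator, budget):
    """probs_by_collaborator: list of (n_samples, n_classes) prediction arrays."""
    probs = np.stack(probs_by_collaborator)                # (C, N, K)
    mean = probs.mean(axis=0, keepdims=True)               # committee consensus
    disagreement = ((probs - mean) ** 2).sum(axis=(0, 2))  # variance across members
    return np.argsort(disagreement)[-budget:]              # most contested samples
```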
arXiv Detail & Related papers (2024-03-27T10:40:27Z)
- A Review of Cooperation in Multi-agent Learning [5.334450724000142]
Cooperation in multi-agent learning (MAL) is a topic at the intersection of numerous disciplines.
This paper provides an overview of the fundamental concepts, problem settings and algorithms of multi-agent learning.
arXiv Detail & Related papers (2023-12-08T16:42:15Z)
- Beyond Isolation: Multi-Agent Synergy for Improving Knowledge Graph Construction [10.1305370182537]
This paper introduces a novel framework, CooperKGC, for knowledge graph construction.
CooperKGC establishes a collaborative processing network, assembling a KGC collaboration team capable of concurrently addressing entity, relation, and event extraction tasks.
Our experiments unequivocally demonstrate that fostering collaboration and information interaction among diverse agents within CooperKGC yields superior results compared to individual cognitive processes operating in isolation.
arXiv Detail & Related papers (2023-12-05T07:27:08Z)
- Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration [83.4031923134958]
Corex is a suite of novel general-purpose strategies that transform Large Language Models into autonomous agents.
Inspired by human behaviors, Corex is constituted by diverse collaboration paradigms including Debate, Review, and Retrieve modes.
We demonstrate that orchestrating multiple LLMs to work in concert yields substantially better performance compared to existing methods.
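A highly simplified sketch of a debate-style loop in this spirit follows; the `query_model` stub and prompt format are placeholders, not Corex's actual interface.

```python
# Skeleton of a multi-model debate round (illustrative; plug in a real LLM client).
def query_model(model, prompt):
    raise NotImplementedError("replace with an actual LLM API call")

def debate(models, question, rounds=2):
    answers = [query_model(m, question) for m in models]
    for _ in range(rounds):
        revised = []
        for i, m in enumerate(models):
            others = [a for j, a in enumerate(answers) if j != i]
            prompt = f"{question}\nOther agents answered: {others}\nRevise your answer."
            revised.append(query_model(m, prompt))
        answers = revised
    return max(set(answers), key=answers.count)  # majority vote over final answers
```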
arXiv Detail & Related papers (2023-09-30T07:11:39Z)
- NCL++: Nested Collaborative Learning for Long-Tailed Visual Recognition [63.90327120065928]
We propose Nested Collaborative Learning (NCL++), which tackles the long-tailed learning problem through collaborative learning.
To realize this collaboration in the long-tailed setting, a balanced online distillation is proposed.
To sharpen the model's ability to distinguish confusing categories, we further propose Hard Category Mining.
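For intuition, the kind of mutual online distillation this summary alludes to can be written as a symmetric KL term between collaborating networks (a generic sketch; NCL++'s balanced re-weighting and Hard Category Mining are omitted):

```python
# Generic mutual distillation loss between two experts (illustrative sketch).
import torch.nn.functional as F

def mutual_distillation_loss(logits_a, logits_b, targets, temperature=2.0, alpha=0.5):
    """Supervised cross-entropy plus symmetric KL between the two experts."""
    ce = F.cross_entropy(logits_a, targets) + F.cross_entropy(logits_b, targets)
    log_pa = F.log_softmax(logits_a / temperature, dim=1)
    log_pb = F.log_softmax(logits_b / temperature, dim=1)
    kl = F.kl_div(log_pa, log_pb.exp(), reduction="batchmean") + \
         F.kl_div(log_pb, log_pa.exp(), reduction="batchmean")
    return ce + alpha * (temperature ** 2) * kl
```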
arXiv Detail & Related papers (2023-06-29T06:10:40Z)
- Exploring Interactions and Regulations in Collaborative Learning: An Interdisciplinary Multimodal Dataset [40.193998859310156]
This paper introduces a new multimodal dataset with cognitive and emotional triggers to explore how regulations affect interactions during the collaborative process.
A learning task with intentional interventions is designed and assigned to 15-year-old high school students.
Analysis of annotated emotions, body gestures, and their interactions indicates that our dataset with designed treatments could effectively examine moments of regulation in collaborative learning.
arXiv Detail & Related papers (2022-10-11T12:56:36Z)
- Distributed Deep Learning in Open Collaborations [49.240611132653456]
We propose a novel algorithmic framework designed specifically for collaborative training.
We demonstrate the effectiveness of our approach for SwAV and ALBERT pretraining in realistic conditions and achieve performance comparable to traditional setups at a fraction of the cost.
arXiv Detail & Related papers (2021-06-18T16:23:13Z)
- Practical One-Shot Federated Learning for Cross-Silo Setting [114.76232507580067]
One-shot federated learning is a promising approach to making federated learning applicable in the cross-silo setting.
We propose a practical one-shot federated learning algorithm named FedKT.
By utilizing the knowledge transfer technique, FedKT can be applied to any classification model and can flexibly achieve differential privacy guarantees.
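A toy sketch of one-shot knowledge transfer in this spirit (simplified; FedKT's actual student-teacher hierarchy and privacy mechanism differ): locally trained client models each label a shared public set once, and a server model is trained on the aggregated votes in a single communication round.

```python
# One-shot transfer via voting on a public unlabeled set (illustrative sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

def one_shot_transfer(client_models, public_X, n_classes):
    votes = np.stack([m.predict(public_X) for m in client_models])  # (clients, samples)
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes), 0, votes)    # (classes, samples)
    pseudo_labels = counts.argmax(axis=0)
    # Adding calibrated noise to `counts` before the argmax would give a
    # PATE-style differential privacy variant.
    return LogisticRegression(max_iter=1000).fit(public_X, pseudo_labels)
```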
arXiv Detail & Related papers (2020-10-02T14:09:10Z)
- Joint Contrastive Learning with Infinite Possibilities [114.45811348666898]
This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.
We derive a particular form of contrastive loss, named Joint Contrastive Learning (JCL).
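For context, the standard InfoNCE objective that contrastive methods build on looks as follows (a generic sketch; JCL's probabilistic treatment of many positives per anchor goes beyond this form):

```python
# Standard InfoNCE contrastive loss (for context; not JCL's exact objective).
import torch
import torch.nn.functional as F

def info_nce(z_i, z_j, temperature=0.1):
    """z_i, z_j: L2-normalized embeddings of two views, shape (N, d)."""
    logits = z_i @ z_j.t() / temperature  # (N, N) pairwise similarities
    labels = torch.arange(z_i.size(0), device=z_i.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)
```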
arXiv Detail & Related papers (2020-09-30T16:24:21Z)