Federated Selective Aggregation for Knowledge Amalgamation
- URL: http://arxiv.org/abs/2207.13309v1
- Date: Wed, 27 Jul 2022 05:36:50 GMT
- Title: Federated Selective Aggregation for Knowledge Amalgamation
- Authors: Donglin Xie, Ruonan Yu, Gongfan Fang, Jie Song, Zunlei Feng, Xinchao
Wang, Li Sun, and Mingli Song
- Abstract summary: The goal of FedSA is to train a student model for a new task with the help of several decentralized teachers.
Our motivation for investigating such a problem setup stems from a recent dilemma of model sharing.
The proposed FedSA offers a solution to this dilemma and goes one step further, since the learned student may specialize in a new task different from those of all the teachers.
- Score: 66.94340185846686
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we explore a new knowledge-amalgamation problem, termed
Federated Selective Aggregation (FedSA). The goal of FedSA is to train a
student model for a new task with the help of several decentralized teachers,
whose pre-training tasks and data are different and agnostic. Our motivation
for investigating such a problem setup stems from a recent dilemma of model
sharing. Many researchers or institutes have spent enormous resources on
training large and competent networks. Due to privacy, security, or
intellectual-property concerns, however, they are not able to share their own
pre-trained models, even if they wish to contribute to the community. The
proposed FedSA offers a solution to this dilemma and goes one step further,
since the learned student may specialize in a new task different from those of
all the teachers. To this end, we propose a dedicated strategy for handling
FedSA. Specifically, our student-training process is driven by a novel
saliency-based approach that adaptively selects teachers as the participants
and integrates their representative capabilities into the student. To evaluate
the effectiveness of FedSA, we conduct experiments on both single-task and
multi-task settings. Experimental results demonstrate that FedSA effectively
amalgamates knowledge from decentralized models and achieves competitive
performance to centralized baselines.
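The saliency-driven teacher selection described in the abstract can be illustrated with a toy sketch. Everything below (the confidence-margin saliency proxy, the averaging of selected teachers' soft predictions, and the example logits) is a hypothetical stand-in for illustration, not the paper's actual method:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def saliency_select(teacher_logits, k):
    """Rank teachers by a simple saliency proxy (confidence margin of the
    prediction) and keep the top-k as participants for this sample."""
    def margin(logits):
        p = sorted(softmax(logits), reverse=True)
        return p[0] - p[1]  # gap between the top-2 class probabilities
    ranked = sorted(range(len(teacher_logits)),
                    key=lambda i: margin(teacher_logits[i]),
                    reverse=True)
    return ranked[:k]

def amalgamate(teacher_logits, selected):
    """Average the selected teachers' soft predictions into one target
    distribution for the student to imitate."""
    probs = [softmax(teacher_logits[i]) for i in selected]
    n = len(probs)
    return [sum(p[c] for p in probs) / n for c in range(len(probs[0]))]

# Three hypothetical teachers scoring the same input over 3 classes.
logits = [[2.0, 0.1, 0.1],   # confident teacher
          [0.5, 0.4, 0.4],   # uncertain teacher
          [1.5, 0.2, 1.4]]   # ambiguous teacher
chosen = saliency_select(logits, k=2)
target = amalgamate(logits, chosen)
```

The selection runs per sample, so different teachers can participate for different inputs, which is the "adaptive" part of the abstract's description.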
Related papers
- Decentralized Online Learning in Task Assignment Games for Mobile
Crowdsensing [55.07662765269297]
A mobile crowdsensing platform (MCSP) sequentially publishes sensing tasks to the available mobile units (MUs) that signal their willingness to participate in a task by sending sensing offers back to the MCSP.
A stable task assignment must address two challenges: the MCSP's and MUs' conflicting goals, and the uncertainty about the MUs' required efforts and preferences.
To overcome these challenges, a novel decentralized approach combining matching theory and online learning, called collision-avoidance multi-armed bandit with strategic free sensing (CA-MAB-SFS), is proposed.
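The bandit-plus-matching idea can be sketched in a toy form. The UCB1 index is standard, but the collision rule below (lower-indexed MU keeps the task, the other backs off to its next-best choice) and all example numbers are hypothetical simplifications of CA-MAB-SFS:

```python
import math

def ucb1_indices(counts, rewards, t):
    """UCB1 index per task: empirical mean reward plus an exploration bonus."""
    return [
        float("inf") if counts[i] == 0
        else rewards[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
        for i in range(len(counts))
    ]

def assign_tasks(all_counts, all_rewards, t):
    """Each MU picks its highest-UCB task; on a collision the lower-indexed
    MU keeps the task and the other backs off to its next-best choice."""
    taken, choices = set(), []
    for counts, rewards in zip(all_counts, all_rewards):
        idx = ucb1_indices(counts, rewards, t)
        order = sorted(range(len(counts)), key=lambda i: idx[i], reverse=True)
        pick = next((i for i in order if i not in taken), None)
        if pick is not None:
            taken.add(pick)
        choices.append(pick)
    return choices

# Two MUs with play counts and cumulative rewards over two tasks.
counts = [[5, 5], [5, 5]]
rewards = [[4.0, 1.0], [4.5, 1.0]]
choices = assign_tasks(counts, rewards, t=10)  # both prefer task 0; MU1 backs off
```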
arXiv Detail & Related papers (2023-09-19T13:07:15Z) - FedSA: Accelerating Intrusion Detection in Collaborative Environments
with Federated Simulated Annealing [2.7011265453906983]
Federated learning emerges as a solution to collaborative training for an Intrusion Detection System (IDS).
This paper proposes the Federated Simulated Annealing (FedSA) metaheuristic to select the hyperparameters and a subset of participants for each aggregation round in federated learning.
The proposal requires up to 50% fewer aggregation rounds than the conventional aggregation approach to achieve approximately 97% accuracy in attack detection.
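Simulated-annealing participant selection can be sketched as follows. The flip-one-client neighborhood, the linear cooling schedule, and the utility-minus-cost objective are all illustrative assumptions, not the paper's actual formulation:

```python
import math
import random

def sa_select(clients, score_fn, rounds=500, t0=1.0, seed=0):
    """Select a participant subset by simulated annealing: flip one client
    in or out each step, always accept improvements, and accept worse
    subsets with a probability that decays as the temperature cools."""
    rng = random.Random(seed)
    current = set(rng.sample(clients, k=max(1, len(clients) // 2)))
    cur_score = score_fn(current)
    best, best_score = set(current), cur_score
    for step in range(rounds):
        t = t0 * (1.0 - step / rounds) + 1e-9     # linear cooling schedule
        cand = set(current)
        cand.symmetric_difference_update({rng.choice(clients)})  # flip one
        if not cand:
            continue
        s = score_fn(cand)
        if s >= cur_score or rng.random() < math.exp((s - cur_score) / t):
            current, cur_score = cand, s
            if s > best_score:
                best, best_score = set(cand), s
    return best, best_score

# Hypothetical per-client utility minus a fixed participation cost.
UTILITY = {0: 0.9, 1: 0.2, 2: 0.8, 3: 0.1, 4: 0.7}

def round_score(subset):
    return sum(UTILITY[c] for c in subset) - 0.5 * len(subset)

participants, score = sa_select(list(UTILITY), round_score)
```

Tracking the best subset seen (rather than only the final one) is what makes the anneal safe to stop after a fixed budget of rounds.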
arXiv Detail & Related papers (2022-05-23T14:27:56Z) - Secure Distributed Training at Scale [65.7538150168154]
Training in the presence of peers requires specialized distributed training algorithms with Byzantine tolerance.
We propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency.
arXiv Detail & Related papers (2021-06-21T17:00:42Z) - Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach, which allows knowledge to be shared between trained models.
Students are trained on the outputs of their teachers via synthetically generated input data.
The results show that a student model trained from scratch on the teachers' outputs reaches F1-scores comparable to those of the teacher.
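The core mechanism (a student fitted only to teacher outputs on synthetic inputs, never touching the teacher's weights or original data) can be shown with a minimal linear toy; the teacher map, learning rate, and step count below are illustrative assumptions:

```python
import random

# Hypothetical teacher: a fixed linear map the student never sees directly.
TEACHER_W = [0.7, -0.3]

def teacher(x):
    return sum(w * xi for w, xi in zip(TEACHER_W, x))

def distill(steps=3000, lr=0.05, seed=0):
    """Train a linear student purely on (synthetic input, teacher output)
    pairs via SGD on the squared error to the teacher's prediction."""
    rng = random.Random(seed)
    w = [0.0, 0.0]
    for _ in range(steps):
        x = [rng.uniform(-1, 1) for _ in range(2)]  # synthetic query
        err = sum(wi * xi for wi, xi in zip(w, x)) - teacher(x)
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

student_w = distill()  # converges toward TEACHER_W
```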
arXiv Detail & Related papers (2021-02-01T14:38:54Z) - FERN: Fair Team Formation for Mutually Beneficial Collaborative Learning [9.484474204788349]
This work introduces FERN, a fair team formation approach that promotes mutually beneficial peer learning.
We formulate this problem as a discrete optimization problem, show it to be NP-hard, and propose a hill-climbing algorithm.
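A hill-climbing local search over team assignments might look like the sketch below. The move set (reassign one worker) and the fairness objective (maximize the weakest team's total skill) are stand-ins for illustration, not FERN's actual objective:

```python
import random

def hill_climb_teams(skills, k, iters=500, seed=0):
    """Split workers into k teams by local search: repeatedly move one
    worker to another team and keep the move only if the fairness score
    (strength of the weakest team) strictly improves."""
    rng = random.Random(seed)
    assign = [i % k for i in range(len(skills))]  # round-robin start

    def score(a):
        totals = [0.0] * k
        for worker, team in enumerate(a):
            totals[team] += skills[worker]
        return min(totals)  # fairness: the weakest team's total skill

    best = score(assign)
    for _ in range(iters):
        w = rng.randrange(len(skills))
        t = rng.randrange(k)
        if t == assign[w]:
            continue
        old, assign[w] = assign[w], t
        s = score(assign)
        if s > best:
            best = s
        else:
            assign[w] = old  # revert non-improving moves
    return assign, best

teams, fairness = hill_climb_teams([5, 1, 4, 2, 3, 3], k=2)
```

As usual for hill climbing on an NP-hard objective, the search can stall in a local optimum, which is why only a fairness lower bound (not the global optimum) is guaranteed.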
arXiv Detail & Related papers (2020-11-23T18:38:01Z) - Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
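Curriculum selection by value disagreement can be illustrated briefly: goals where an ensemble of value estimates disagrees most sit at the frontier between solved and unsolved. The ensemble values and goal names below are made up for the example:

```python
import statistics

def rank_goals_by_disagreement(value_estimates):
    """Score each goal by the variance of an ensemble's value estimates and
    return goals ordered from most to least disagreed-upon."""
    scores = {g: statistics.pvariance(v) for g, v in value_estimates.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ensemble of 3 value functions scoring 3 goals.
estimates = {
    "near": [0.95, 0.93, 0.96],  # solved: the ensemble agrees it is easy
    "edge": [0.80, 0.30, 0.55],  # frontier: the ensemble disagrees
    "far":  [0.02, 0.03, 0.01],  # unsolved: the ensemble agrees it is hard
}
curriculum = rank_goals_by_disagreement(estimates)  # "edge" ranked first
```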
arXiv Detail & Related papers (2020-06-17T03:58:25Z) - Mutual Information Based Knowledge Transfer Under State-Action Dimension
Mismatch [14.334987432342707]
We propose a new framework for transfer learning where the teacher and the student can have arbitrarily different state- and action-spaces.
To handle this mismatch, we produce embeddings which can systematically extract knowledge from the teacher policy and value networks.
We demonstrate successful transfer learning in situations when the teacher and student have different state- and action-spaces.
arXiv Detail & Related papers (2020-06-12T09:51:17Z) - Human AI interaction loop training: New approach for interactive
reinforcement learning [0.0]
Reinforcement Learning (RL) provides effective results in various decision-making tasks of machine learning, with an agent learning from a stand-alone reward function.
RL presents unique challenges with large amounts of environment states and action spaces, as well as in the determination of rewards.
Imitation Learning (IL) offers a promising solution for those challenges using a teacher.
arXiv Detail & Related papers (2020-03-09T15:27:48Z) - Transfer Heterogeneous Knowledge Among Peer-to-Peer Teammates: A Model
Distillation Approach [55.83558520598304]
We propose a brand new solution to reuse experiences and transfer value functions among multiple students via model distillation.
We also describe how to design an efficient communication protocol to exploit heterogeneous knowledge.
Our proposed framework, namely Learning and Teaching Categorical Reinforcement, shows promising performance in stabilizing and accelerating learning progress.
arXiv Detail & Related papers (2020-02-06T11:31:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.