Real-Time Decentralized Knowledge Transfer at the Edge
- URL: http://arxiv.org/abs/2011.05961v4
- Date: Fri, 1 Oct 2021 16:12:29 GMT
- Title: Real-Time Decentralized Knowledge Transfer at the Edge
- Authors: Orpaz Goldstein, Mohammad Kachuee, Derek Shiell, Majid Sarrafzadeh
- Abstract summary: Transferring knowledge in a selective decentralized approach enables models to retain their local insights.
We propose a method based on knowledge distillation for pairwise knowledge transfer pipelines from models trained on non-i.i.d. data.
Our experiments show knowledge transfer using our model outperforms standard methods in a real-time transfer scenario.
- Score: 6.732931634492992
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The proliferation of edge networks creates islands of learning agents working
on local streams of data. Transferring knowledge between these agents in
real-time without exposing private data allows for collaboration to decrease
learning time and increase model confidence. Incorporating knowledge from data
that a local model did not see creates an ability to debias a local model or
add to classification abilities on data never before seen. Transferring
knowledge in a selective decentralized approach enables models to retain their
local insights, allowing for local flavors of a machine learning model. This
approach suits the decentralized architecture of edge networks, as a local edge
node will serve a community of learning agents that will likely encounter
similar data. We propose a method based on knowledge distillation for pairwise
knowledge transfer pipelines from models trained on non-i.i.d. data and compare
it to other popular knowledge transfer methods. Additionally, we test different
scenarios of knowledge transfer network construction and show the practicality
of our approach. Our experiments show knowledge transfer using our model
outperforms standard methods in a real-time transfer scenario.
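The pairwise transfer the abstract describes builds on knowledge distillation, where a "teacher" model's temperature-softened predictions supervise a "student". A minimal sketch of the standard soft-label distillation loss (the function names and temperature value are illustrative, not taken from the paper):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student
    distributions, scaled by T^2 as in the standard formulation."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# Identical logits give zero loss; diverging logits give a positive loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]))
```

Because only logits (not raw samples) cross the wire, a pairwise pipeline of this kind can transfer knowledge between edge agents without exposing private data, which is the setting the abstract targets.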
Related papers
- Proximity-based Self-Federated Learning [1.0066310107046081]
This paper introduces a novel, fully-distributed federated learning strategy called proximity-based self-federated learning.
Unlike traditional algorithms, our approach encourages clients to share and adjust their models with neighbouring nodes based on geographic proximity and model accuracy.
arXiv Detail & Related papers (2024-07-17T08:44:45Z)
- MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities [72.68829963458408]
We present MergeNet, which learns to bridge the gap of parameter spaces of heterogeneous models.
The core mechanism of MergeNet lies in the parameter adapter, which operates by querying the source model's low-rank parameters.
MergeNet is learned alongside both models, allowing our framework to dynamically transfer and adapt knowledge relevant to the current stage.
arXiv Detail & Related papers (2024-04-20T08:34:39Z)
- KnFu: Effective Knowledge Fusion [5.305607095162403]
Federated Learning (FL) has emerged as a prominent alternative to the traditional centralized learning approach.
The paper proposes Effective Knowledge Fusion (KnFu) algorithm that evaluates knowledge of local models to only fuse semantic neighbors' effective knowledge for each client.
A key conclusion of the work is that in scenarios with large and highly heterogeneous local datasets, local training could be preferable to knowledge fusion-based solutions.
arXiv Detail & Related papers (2024-03-18T15:49:48Z)
- Bridged-GNN: Knowledge Bridge Learning for Effective Knowledge Transfer [65.42096702428347]
Graph Neural Networks (GNNs) aggregate information from neighboring nodes.
Knowledge Bridge Learning (KBL) learns a knowledge-enhanced posterior distribution for target domains.
Bridged-GNN includes an Adaptive Knowledge Retrieval module to build Bridged-Graph and a Graph Knowledge Transfer module.
arXiv Detail & Related papers (2023-08-18T12:14:51Z)
- FedACK: Federated Adversarial Contrastive Knowledge Distillation for Cross-Lingual and Cross-Model Social Bot Detection [22.979415040695557]
FedACK is a new adversarial contrastive knowledge distillation framework for social bot detection.
A global generator is used to extract the knowledge of global data distribution and distill it into each client's local model.
Experiments demonstrate that FedACK outperforms the state-of-the-art approaches in terms of accuracy, communication efficiency, and feature space consistency.
arXiv Detail & Related papers (2023-03-10T03:10:08Z)
- Change Detection for Local Explainability in Evolving Data Streams [72.4816340552763]
Local feature attribution methods have become a popular technique for post-hoc and model-agnostic explanations.
It is often unclear how local attributions behave in realistic, constantly evolving settings such as streaming and online applications.
We present CDLEEDS, a flexible and model-agnostic framework for detecting local change and concept drift.
arXiv Detail & Related papers (2022-09-06T18:38:34Z)
- Certified Robustness in Federated Learning [54.03574895808258]
We study the interplay between federated training, personalization, and certified robustness.
We find that the simple federated averaging technique is effective in building not only more accurate, but also more certifiably-robust models.
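The federated averaging technique this summary refers to combines client models by a sample-size-weighted average of their parameters. A minimal sketch of that aggregation step (function name and flat-vector representation are illustrative simplifications, not the paper's implementation):

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_weights: one flat parameter vector (list of floats) per client.
    client_sizes: number of local training samples per client, used as
    the aggregation weight for that client's parameters.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with 100 and 300 samples: the larger client dominates.
avg = federated_average([[1.0, 0.0], [0.0, 1.0]], [100, 300])
print(avg)  # [0.25, 0.75]
```

In a real system each round would broadcast the averaged parameters back to the clients before their next round of local training.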
arXiv Detail & Related papers (2022-06-06T12:10:53Z)
- CDKT-FL: Cross-Device Knowledge Transfer using Proxy Dataset in Federated Learning [27.84845136697669]
We develop a novel knowledge distillation-based approach to study the extent of knowledge transfer between the global model and local models.
We show the proposed method achieves significant speedups and high personalized performance of local models.
arXiv Detail & Related papers (2022-04-04T14:49:19Z)
- Data-Free Knowledge Transfer: A Survey [13.335198869928167]
Knowledge distillation (KD) and domain adaptation (DA) have been proposed and have become research highlights.
They both aim to transfer useful information from a well-trained model that had access to the original training data.
Recently, the data-free knowledge transfer paradigm has attracted considerable attention.
arXiv Detail & Related papers (2021-12-31T03:39:42Z)
- Multi-Branch Deep Radial Basis Function Networks for Facial Emotion Recognition [80.35852245488043]
We propose a CNN based architecture enhanced with multiple branches formed by radial basis function (RBF) units.
RBF units capture local patterns shared by similar instances using an intermediate representation.
We show that it is the incorporation of local information that makes the proposed model competitive.
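An RBF unit of the kind this summary describes responds strongly when an input lies near the unit's learned center and decays with distance, which is how it captures local patterns shared by similar instances. A minimal sketch of a Gaussian RBF unit (the function name and `gamma` width parameter are illustrative, not from the paper):

```python
import math

def rbf_unit(x, center, gamma=1.0):
    """Gaussian radial basis function: exp(-gamma * ||x - center||^2).
    Output is 1.0 at the center and falls off with squared distance."""
    sq_dist = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-gamma * sq_dist)

print(rbf_unit([1.0, 2.0], [1.0, 2.0]))  # 1.0 (input at the center)
print(rbf_unit([3.0, 2.0], [1.0, 2.0]))  # exp(-4), a weak response
```

A branch of such units forms an intermediate representation in which each unit acts as a soft detector for one local region of the feature space.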
arXiv Detail & Related papers (2021-09-07T21:05:56Z)
- TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.