Federated Reconnaissance: Efficient, Distributed, Class-Incremental Learning
- URL: http://arxiv.org/abs/2109.00150v1
- Date: Wed, 1 Sep 2021 01:51:30 GMT
- Title: Federated Reconnaissance: Efficient, Distributed, Class-Incremental Learning
- Authors: Sean M. Hendryx, Dharma Raj KC, Bradley Walls, Clayton T. Morrison
- Abstract summary: We describe a class of learning problems in which distributed clients learn new concepts independently and communicate that knowledge efficiently.
We find that prototypical networks are a strong approach in that they are robust to catastrophic forgetting while incorporating new information efficiently.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe federated reconnaissance, a class of learning problems in which
distributed clients learn new concepts independently and communicate that
knowledge efficiently. In particular, we propose an evaluation framework and
methodological baseline for a system in which each client is expected to learn
a growing set of classes and communicate knowledge of those classes efficiently
with other clients, such that, after knowledge merging, the clients should be
able to accurately discriminate between classes in the superset of classes
observed by the set of clients. We compare a range of learning algorithms for
this problem and find that prototypical networks are a strong approach in that
they are robust to catastrophic forgetting while incorporating new information
efficiently. Furthermore, we show that the online averaging of prototype
vectors is effective for client model merging and requires only a small amount
of communication overhead, memory, and update time per class with no
gradient-based learning or hyperparameter tuning. Additionally, to put our
results in context, we find that a simple prototypical network with four
convolutional layers significantly outperforms complex, state-of-the-art
continual learning algorithms, increasing the accuracy by over 22% after
learning 600 Omniglot classes and over 33% after learning 20 mini-ImageNet
classes incrementally. These results have important implications for federated
reconnaissance and continual learning more generally by demonstrating that
communicating feature vectors is an efficient, robust, and effective means for
distributed, continual learning.
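To make the merging step concrete, here is a minimal sketch (our illustration, not the authors' released code) of count-weighted online averaging of per-class prototypes with nearest-prototype classification; the function names are ours:

```python
import numpy as np

def client_prototypes(embeddings, labels):
    """Per-class mean embedding (prototype) and example count on one client."""
    protos = {}
    for c in np.unique(labels):
        mask = labels == c
        protos[int(c)] = (embeddings[mask].mean(axis=0), int(mask.sum()))
    return protos

def merge_prototypes(a, b):
    """Count-weighted online average of two clients' prototype stores.
    No gradient-based learning or hyperparameters; the merge is exact
    (it recovers the pooled per-class mean) and order-independent."""
    merged = dict(a)
    for c, (p_b, n_b) in b.items():
        if c in merged:
            p_a, n_a = merged[c]
            n = n_a + n_b
            merged[c] = ((n_a * p_a + n_b * p_b) / n, n)
        else:
            merged[c] = (p_b, n_b)
    return merged

def predict(protos, query):
    """Nearest-prototype (Euclidean) classification over all classes seen so far."""
    classes = list(protos)
    dists = [np.linalg.norm(query - protos[c][0]) for c in classes]
    return classes[int(np.argmin(dists))]
```

Merging is commutative and associative up to floating-point error, and each class costs only one embedding vector plus an integer count to communicate.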
Related papers
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
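The abstract does not spell out the update rule; as a rough sketch of client-local adaptive learning rates, here is a plain AMSGrad step that each client could run on its own state (illustrative only, not FedLALR's actual algorithm; bias correction omitted for brevity):

```python
import numpy as np

class LocalAMSGrad:
    """Per-client AMSGrad state (illustrative): the effective step size adapts
    to each client's own, possibly non-IID, gradient statistics."""
    def __init__(self, dim, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)       # first-moment estimate
        self.v = np.zeros(dim)       # second-moment estimate
        self.v_hat = np.zeros(dim)   # running elementwise max of v (AMSGrad)

    def step(self, w, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        return w - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```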
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Reinforcement Learning Based Multi-modal Feature Fusion Network for Novel Class Discovery [47.28191501836041]
In this paper, we employ a Reinforcement Learning framework to simulate the cognitive processes of humans.
We also deploy a Member-to-Leader Multi-Agent framework to extract and fuse features from multi-modal information.
We demonstrate the performance of our approach in both the 3D and 2D domains by employing the OS-MN40, OS-MN40-Miss, and CIFAR-10 datasets.
arXiv Detail & Related papers (2023-08-26T07:55:32Z)
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- PeFLL: Personalized Federated Learning by Learning to Learn [16.161876130822396]
We present PeFLL, a new personalized federated learning algorithm that improves over the state-of-the-art in three aspects.
At the core of PeFLL lies a learning-to-learn approach that jointly trains an embedding network and a hypernetwork.
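A minimal sketch of the embedding-plus-hypernetwork idea, heavily simplified from the paper (a PyTorch toy in which the hypernetwork emits the weights of a personalized linear head; all module names are ours):

```python
import torch
import torch.nn as nn

class HyperPersonalizer(nn.Module):
    """Toy version: an embedding net maps a client descriptor to a vector,
    and a hypernetwork maps that vector to the parameters of a personalized
    linear classification head."""
    def __init__(self, desc_dim, embed_dim, feat_dim, n_classes):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(desc_dim, embed_dim), nn.ReLU())
        self.hyper = nn.Linear(embed_dim, feat_dim * n_classes + n_classes)
        self.feat_dim, self.n_classes = feat_dim, n_classes

    def forward(self, client_desc, features):
        z = self.embed(client_desc)            # client embedding
        theta = self.hyper(z)                  # generated head parameters
        split = self.feat_dim * self.n_classes
        W = theta[:split].view(self.n_classes, self.feat_dim)
        b = theta[split:]
        return features @ W.t() + b            # personalized logits
```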
arXiv Detail & Related papers (2023-06-08T19:12:42Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
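A sketch of the contrastive distillation objective (an InfoNCE-style loss of our choosing; the paper's exact formulation may differ), where each client matches its representation of a sample to a peer's representation of the same sample:

```python
import torch
import torch.nn.functional as F

def contrastive_distill(client_z, peer_z, tau=0.1):
    """InfoNCE-style loss: pull each client representation toward the peer
    representation of the same sample (diagonal) and away from the rest
    of the batch. Only representations, not weights, are exchanged."""
    client_z = F.normalize(client_z, dim=1)
    peer_z = F.normalize(peer_z, dim=1)
    logits = client_z @ peer_z.t() / tau                    # pairwise similarity
    targets = torch.arange(client_z.size(0), device=client_z.device)
    return F.cross_entropy(logits, targets)
```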
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- FedClassAvg: Local Representation Learning for Personalized Federated Learning on Heterogeneous Neural Networks [21.613436984547917]
We propose a novel personalized federated learning method called federated classifier averaging (FedClassAvg).
FedClassAvg aggregates weights as an agreement on decision boundaries on feature spaces.
We demonstrate that it outperforms the current state-of-the-art algorithms on heterogeneous personalized federated learning tasks.
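The aggregation itself is simple to sketch: backbones remain heterogeneous and stay local, and only the identically shaped classifier heads are averaged (illustrative PyTorch, not the authors' code):

```python
import torch

@torch.no_grad()
def average_classifier_heads(heads):
    """Average only the (identically shaped) classifier heads; each client's
    heterogeneous backbone never leaves the device. Assumes float parameters,
    e.g. nn.Linear heads."""
    avg = {k: torch.zeros_like(v) for k, v in heads[0].state_dict().items()}
    for head in heads:
        for k, v in head.state_dict().items():
            avg[k] += v / len(heads)
    for head in heads:                  # broadcast the agreed decision boundary
        head.load_state_dict(avg)
    return avg
```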
arXiv Detail & Related papers (2022-10-25T08:32:08Z)
- Addressing Client Drift in Federated Continual Learning with Adaptive Optimization [10.303676184878896]
We outline a framework for performing Federated Continual Learning (FCL) by using NetTailor as a candidate continual learning approach.
We show that adaptive federated optimization can reduce the adverse impact of client drift and showcase its effectiveness on CIFAR100, MiniImagenet, and Decathlon benchmarks.
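In the spirit of adaptive federated optimization (e.g., FedAdam of Reddi et al.), the server treats the averaged client update as a pseudo-gradient and takes an adaptive step; a minimal sketch (ours, not necessarily this paper's exact optimizer):

```python
import numpy as np

def server_adam_step(w, client_ws, state, lr=1e-2, b1=0.9, b2=0.99, eps=1e-3):
    """Treat the mean client update as a pseudo-gradient and take an Adam-style
    step on the server, damping the oscillations that client drift induces.
    Initialize once with: state = {"m": np.zeros_like(w), "v": np.zeros_like(w)}."""
    delta = np.mean([cw - w for cw in client_ws], axis=0)   # pseudo-gradient
    state["m"] = b1 * state["m"] + (1 - b1) * delta
    state["v"] = b2 * state["v"] + (1 - b2) * delta ** 2
    return w + lr * state["m"] / (np.sqrt(state["v"]) + eps)
```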
arXiv Detail & Related papers (2022-03-24T20:00:03Z)
- FedKD: Communication Efficient Federated Learning via Knowledge Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication-efficient federated learning method based on knowledge distillation.
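A generic temperature-scaled distillation loss illustrates the idea of exchanging predictions instead of full parameter updates (standard KD; FedKD's actual scheme adds mutual distillation and further refinements):

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled soft-label distillation: the student matches the
    teacher's tempered distribution, so only logits need to be communicated."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```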
arXiv Detail & Related papers (2021-08-30T15:39:54Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
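A sketch of the alternating client round implied here: many cheap updates to the low-dimensional local head, then a few updates to the shared representation, which is all the client communicates (illustrative PyTorch; names and step counts are ours):

```python
import torch
from itertools import cycle

def local_round(rep, head, loader, loss_fn, head_steps=10, rep_steps=1):
    """FedRep-style client round (sketch): many head-only steps, then a few
    representation steps; only rep's weights go back for server averaging."""
    opt_head = torch.optim.SGD(head.parameters(), lr=0.01)
    opt_rep = torch.optim.SGD(rep.parameters(), lr=0.01)
    batches = cycle(loader)
    for _ in range(head_steps):            # personalize the local head
        x, y = next(batches)
        opt_head.zero_grad()
        loss_fn(head(rep(x).detach()), y).backward()
        opt_head.step()
    for _ in range(rep_steps):             # refine the shared representation
        x, y = next(batches)
        opt_rep.zero_grad()
        loss_fn(head(rep(x)), y).backward()
        opt_rep.step()
    return rep.state_dict()                # communicated; head stays local
```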
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
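The abstract leaves the quantizer unspecified; as a stand-in, here is a simple nonlinear (logarithmic) gradient quantizer showing how a few bits per coordinate plus one scale can replace float32 traffic (illustrative, not CosSGD's actual scheme):

```python
import numpy as np

def nonlinear_quantize(g, bits=4):
    """Illustrative nonlinear quantizer: map gradient magnitudes onto a
    logarithmic grid, keeping signs separately. Transmitting (levels, scale,
    signs) instead of float32 values compresses communication."""
    scale = np.abs(g).max() + 1e-12
    x = np.abs(g) / scale                          # magnitudes in [0, 1]
    levels = (2 ** bits) - 1
    q = np.round(np.log1p(x * levels) / np.log1p(levels) * levels)
    return q.astype(np.uint8), scale, np.sign(g)

def dequantize(q, scale, sign, bits=4):
    """Invert the logarithmic mapping to reconstruct approximate gradients."""
    levels = (2 ** bits) - 1
    x = np.expm1(q / levels * np.log1p(levels)) / levels
    return sign * x * scale
```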
arXiv Detail & Related papers (2020-12-15T12:20:28Z)
- ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image Classification [49.87503122462432]
We introduce a novel neural network termed Relation-and-Margin learning Network (ReMarNet).
Our method assembles two networks with different backbones to learn features that perform well under both classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
arXiv Detail & Related papers (2020-06-27T13:50:20Z)