ProtoFL: Unsupervised Federated Learning via Prototypical Distillation
- URL: http://arxiv.org/abs/2307.12450v2
- Date: Tue, 8 Aug 2023 01:42:17 GMT
- Title: ProtoFL: Unsupervised Federated Learning via Prototypical Distillation
- Authors: Hansol Kim, Youngjun Kwak, Minyoung Jung, Jinho Shin, Youngsung Kim,
Changick Kim
- Abstract summary: Federated learning is a promising approach for enhancing data privacy preservation.
We propose 'ProtoFL', Prototypical Representation Distillation based unsupervised Federated Learning.
We introduce a local one-class classifier based on normalizing flows to improve performance with limited data.
- Score: 24.394455010267617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a promising approach for enhancing data privacy
preservation, particularly for authentication systems. However, limited round
communications, scarce representation, and scalability pose significant
challenges to its deployment, hindering its full potential. In this paper, we
propose 'ProtoFL', Prototypical Representation Distillation based unsupervised
Federated Learning to enhance the representation power of a global model and
reduce round communication costs. Additionally, we introduce a local one-class
classifier based on normalizing flows to improve performance with limited data.
Our study represents the first investigation of using FL to improve one-class
classification performance. We conduct extensive experiments on five widely
used benchmarks, namely MNIST, CIFAR-10, CIFAR-100, ImageNet-30, and
Keystroke-Dynamics, to demonstrate the superior performance of our proposed
framework over previous methods in the literature.
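The abstract does not spell out ProtoFL's losses, but the core idea of prototypical representation distillation can be illustrated with a rough PyTorch sketch: each class (or cluster, in the unsupervised setting) is summarized by a normalized mean embedding from the global model, and the local model is trained to keep its features close to those prototypes. The function names, the temperature, and the use of (pseudo-)labels are illustrative assumptions, and the normalizing-flow one-class classifier is omitted.

```python
# Illustrative sketch only; the abstract does not give ProtoFL's exact losses.
# "labels" may be pseudo-labels from clustering in the unsupervised setting.
import torch
import torch.nn.functional as F

def class_prototypes(features: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    # One L2-normalized mean embedding per class; assumes every class occurs.
    protos = torch.stack([features[labels == c].mean(dim=0) for c in range(num_classes)])
    return F.normalize(protos, dim=1)

def prototype_distillation_loss(local_feats, global_protos, labels, temperature=0.1):
    # Pull each local feature toward its prototype from the frozen global
    # model via a softmax over cosine similarities.
    logits = F.normalize(local_feats, dim=1) @ global_protos.t() / temperature
    return F.cross_entropy(logits, labels)
```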
Related papers
- Federated Class-Incremental Learning with Hierarchical Generative Prototypes [10.532838477096055]
Federated Learning (FL) aims at unburdening the training of deep models by distributing computation across multiple devices (clients).
Our proposal constrains both biases in the last layer by efficiently finetuning a pre-trained backbone using learnable prompts.
Our method significantly improves the current state of the art, providing an average increase of +7.8% in accuracy.
arXiv Detail & Related papers (2024-06-04T16:12:27Z)
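As a rough illustration of the prompt-based finetuning the entry above describes, the sketch below shows a generic visual-prompt-tuning pattern, not the paper's actual code: only the learnable prompt embeddings and a linear head receive gradients, while the pre-trained backbone stays frozen. The backbone is assumed to map token embeddings of shape (batch, seq, dim) to features of the same shape, which is a simplification.

```python
# Generic prompt-tuning sketch; shapes, names, and sizes are assumptions.
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int, num_prompts: int = 10, num_classes: int = 100):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # freeze the pre-trained weights
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        prompts = self.prompts.unsqueeze(0).expand(tokens.size(0), -1, -1)
        x = torch.cat([prompts, tokens], dim=1)  # prepend learnable prompts
        feats = self.backbone(x)                 # frozen transformer blocks
        return self.head(feats[:, 0])            # classify from the first token
```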
- ILLUMINER: Instruction-tuned Large Language Models as Few-shot Intent Classifier and Slot Filler [1.9015367254988451]
This study evaluates instruction-tuned models (Instruct-LLMs) on popular benchmark datasets for intent classification (IC) and slot filling (SF).
We introduce ILLUMINER, an approach framing IC and SF as language generation tasks for Instruct-LLMs, with a more efficient SF-prompting method compared to prior work.
A comprehensive comparison with multiple baselines shows that our approach, using the FLAN-T5 11B model, outperforms the state-of-the-art joint IC+SF method and in-context learning with GPT3.5 (175B).
arXiv Detail & Related papers (2024-03-26T09:41:21Z)
- Efficient Prompt Tuning of Large Vision-Language Model for Fine-Grained Ship Classification [62.425462136772666]
Fine-grained ship classification in remote sensing (RS-FGSC) poses a significant challenge due to the high similarity between classes and the limited availability of labeled data.
Recent advancements in large pre-trained Vision-Language Models (VLMs) have demonstrated impressive capabilities in few-shot or zero-shot learning.
This study delves into harnessing the potential of VLMs to enhance classification accuracy for unseen ship categories.
arXiv Detail & Related papers (2024-03-13T05:48:58Z)
- Federated Continual Novel Class Learning [68.05835753892907]
We propose a Global Alignment Learning framework that can accurately estimate the global novel class number.
GAL achieves significant improvements in novel-class performance, increasing accuracy by 5.1% to 10.6%.
GAL is shown to be effective in equipping a variety of mainstream Federated Learning algorithms with novel-class discovery and learning capability.
arXiv Detail & Related papers (2023-12-21T00:31:54Z)
- Guiding The Last Layer in Federated Learning with Pre-Trained Models [18.382057374270143]
Federated Learning (FL) is an emerging paradigm that allows a model to be trained across a number of participants without sharing data.
We show that fitting a classification head using the Nearest Class Means (NCM) can be done exactly and orders of magnitude more efficiently than existing proposals.
arXiv Detail & Related papers (2023-06-06T18:02:02Z)
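The NCM head fitting described above is simple enough to sketch. The following is a generic implementation assuming a frozen feature encoder and a standard (inputs, labels) DataLoader, not the paper's own code:

```python
# Generic Nearest Class Means (NCM) head fitting; no gradient training needed.
import torch
import torch.nn.functional as F

@torch.no_grad()
def fit_ncm_head(encoder, loader, num_classes: int, feat_dim: int, device="cpu"):
    sums = torch.zeros(num_classes, feat_dim, device=device)
    counts = torch.zeros(num_classes, 1, device=device)
    for x, y in loader:
        feats = encoder(x.to(device))                 # (batch, feat_dim)
        y = y.to(device)
        sums.index_add_(0, y, feats)                  # accumulate per-class sums
        counts.index_add_(0, y, torch.ones(len(y), 1, device=device))
    means = F.normalize(sums / counts.clamp(min=1), dim=1)
    return means  # classify a feature f by argmax_c cosine(f, means[c])
```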
- Vertical Semi-Federated Learning for Efficient Online Advertising [50.18284051956359]
Semi-VFL (Vertical Semi-Federated Learning) is proposed to make VFL practical for real-world industrial deployment.
We build an inference-efficient single-party student model applicable to the whole sample space.
New representation distillation methods are designed to extract cross-party feature correlations for both the overlapped and non-overlapped data.
arXiv Detail & Related papers (2022-09-30T17:59:27Z)
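A minimal sketch of the representation-distillation idea above, under the assumption that cross-party teacher features exist only for the overlapped rows of a batch; the loss form and weighting are guesses, not the paper's specification:

```python
# Student learns its task on all samples and matches the frozen cross-party
# teacher features only where they exist (overlapped rows).
import torch
import torch.nn.functional as F

def semi_vfl_loss(student_feats, teacher_feats, logits, labels, overlap_mask, alpha=1.0):
    task = F.cross_entropy(logits, labels)
    if overlap_mask.any():
        distill = F.mse_loss(student_feats[overlap_mask],
                             teacher_feats[overlap_mask].detach())
    else:
        distill = torch.zeros((), device=logits.device)
    return task + alpha * distill
```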
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
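Data-free knowledge distillation of the kind FedFTG describes can be sketched as follows; the generator, the averaging of client logits into a teacher ensemble, and the KL objective are illustrative assumptions rather than the paper's exact procedure:

```python
# One illustrative distillation step: a generator synthesizes inputs from
# noise, client models act as an averaged teacher, and only the global
# model is updated here.
import torch
import torch.nn.functional as F

def data_free_distill_step(generator, global_model, client_models, optimizer,
                           batch_size=64, z_dim=100, device="cpu"):
    z = torch.randn(batch_size, z_dim, device=device)
    fake = generator(z).detach()              # synthetic batch; no real data used
    with torch.no_grad():
        teacher_logits = torch.stack([m(fake) for m in client_models]).mean(dim=0)
    loss = F.kl_div(F.log_softmax(global_model(fake), dim=1),
                    F.softmax(teacher_logits, dim=1), reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```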
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm, Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks, including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
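The calibration step can be sketched roughly as below, simplifying the approximated Gaussian mixture to one diagonal Gaussian per class and re-training only the classifier head; the actual method gathers feature statistics from clients, so the details here are assumptions:

```python
# CCVR-style sketch: sample virtual features per class, retrain only the head.
import torch
import torch.nn.functional as F

def calibrate_head(head, feats, labels, num_classes, per_class=200, steps=100, lr=0.01):
    virtual_x, virtual_y = [], []
    for c in range(num_classes):
        fc = feats[labels == c]                       # assumes each class occurs
        mu, std = fc.mean(dim=0), fc.std(dim=0) + 1e-6
        virtual_x.append(mu + std * torch.randn(per_class, fc.size(1)))
        virtual_y.append(torch.full((per_class,), c, dtype=torch.long))
    x, y = torch.cat(virtual_x), torch.cat(virtual_y)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(steps):                            # re-train only the head
        opt.zero_grad()
        F.cross_entropy(head(x), y).backward()
        opt.step()
    return head
```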
- On the Impact of Device and Behavioral Heterogeneity in Federated Learning [5.038980064083677]
Federated learning (FL) is becoming a popular paradigm for collaborative learning over distributed, private datasets owned by non-trusting entities.
This paper describes the challenge of performing training over largely heterogeneous datasets, devices, and networks.
We conduct an empirical study spanning close to 1.5K unique configurations on five popular FL benchmarks.
arXiv Detail & Related papers (2021-02-15T12:04:38Z)
- FedSemi: An Adaptive Federated Semi-Supervised Learning Framework [23.90642104477983]
Federated learning (FL) has emerged as an effective technique for collaboratively training machine learning models without sharing data or leaking privacy.
Most existing FL methods focus on the supervised setting and ignore the utilization of unlabeled data.
We propose FedSemi, a novel, adaptive, and general framework that is the first to introduce consistency regularization into FL via a teacher-student model.
arXiv Detail & Related papers (2020-12-06T15:46:04Z)
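Consistency regularization with a teacher-student pair, as the FedSemi summary above describes, commonly uses an exponential-moving-average (EMA) teacher. The sketch below shows that standard pattern; the hyperparameters and the MSE form are assumptions, not FedSemi's exact design:

```python
# EMA teacher-student consistency: penalize student-teacher disagreement
# on unlabeled inputs; the teacher tracks the student via EMA.
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():        # teacher is never trained directly
        p.requires_grad = False
    return teacher

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1 - decay)

def consistency_loss(student, teacher, unlabeled_x):
    with torch.no_grad():
        target = F.softmax(teacher(unlabeled_x), dim=1)
    return F.mse_loss(F.softmax(student(unlabeled_x), dim=1), target)
```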
- Prior Guided Feature Enrichment Network for Few-Shot Segmentation [64.91560451900125]
State-of-the-art semantic segmentation methods require sufficient labeled data to achieve good results.
Few-shot segmentation is proposed to tackle this problem by learning a model that quickly adapts to new classes with a few labeled support samples.
These frameworks still face the challenge of reduced generalization to unseen classes due to inappropriate use of high-level semantic information.
arXiv Detail & Related papers (2020-08-04T10:41:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.