Zero-Shot Federated Learning with New Classes for Audio Classification
- URL: http://arxiv.org/abs/2106.10019v1
- Date: Fri, 18 Jun 2021 09:32:19 GMT
- Title: Zero-Shot Federated Learning with New Classes for Audio Classification
- Authors: Gautham Krishna Gudur, Satheesh K. Perepu
- Abstract summary: Federated learning is an effective way of extracting insights from different user devices.
New classes with completely unseen data distributions can stream across any device in a federated learning setting.
We propose a unified zero-shot framework to handle the aforementioned challenges during federated learning.
- Score: 0.7106986689736827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning is an effective way of extracting insights from different
user devices while preserving the privacy of users. However, new classes with
completely unseen data distributions can stream across any device in a
federated learning setting, whose data cannot be accessed by the global server
or other users. To this end, we propose a unified zero-shot framework to handle
the aforementioned challenges during federated learning. We simulate two
scenarios here -- 1) when the new class labels are not reported by the user,
the traditional FL setting is used; 2) when new class labels are reported by
the user, we synthesize Anonymized Data Impressions by calculating class
similarity matrices corresponding to each device's new classes followed by
unsupervised clustering to distinguish between new classes across different
users. Moreover, our proposed framework can also handle statistical
heterogeneities in both labels and models across the participating users. We
empirically evaluate our framework on-device across different communication
rounds (FL iterations) with new classes in both local and global updates, along
with heterogeneous labels and models, on two widely used audio classification
applications -- keyword spotting and urban sound classification, and observe an
average deterministic accuracy increase of ~4.041% and ~4.258% respectively.
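The mechanism the abstract describes, computing class similarity for each device's new classes and then clustering to decide which locally reported new classes coincide across users, can be sketched as follows. This is a hedged illustration, not the paper's implementation: cosine similarity over per-class mean feature vectors, the threshold-based union-find clustering, and all device names, labels, and vectors are assumptions made for the example.

```python
# Hypothetical sketch: matching new classes across devices via a class
# similarity matrix plus simple threshold-based clustering (union-find).
# Feature vectors, device names, and the threshold are illustrative only.
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster_new_classes(class_reps, threshold=0.9):
    """class_reps: list of (device_id, local_label, mean_feature_vector).
    Returns a cluster id per entry; entries whose representations are
    similar enough are treated as the same global new class."""
    n = len(class_reps)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Pairwise class similarity matrix; merge highly similar classes.
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(class_reps[i][2], class_reps[j][2]) >= threshold:
                union(i, j)
    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(n)]

reps = [
    ("device_a", "siren", [0.9, 0.1, 0.0]),
    ("device_b", "alarm", [0.88, 0.12, 0.02]),  # likely the same sound class
    ("device_c", "drill", [0.05, 0.1, 0.95]),
]
print(cluster_new_classes(reps))  # → [0, 0, 1]
```

Here "siren" on device_a and "alarm" on device_b are merged into one global new class despite differing local labels, which is the disambiguation the framework needs before aggregation.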
Related papers
- Federated Learning with Only Positive Labels by Exploring Label Correlations [78.59613150221597]
Federated learning aims to collaboratively learn a model by using the data from multiple users under privacy constraints.
In this paper, we study the multi-label classification problem under the federated learning setting.
We propose a novel and generic method termed Federated Averaging by exploring Label Correlations (FedALC)
arXiv Detail & Related papers (2024-04-24T02:22:50Z)
- Robust Semi-Supervised Learning for Self-learning Open-World Classes [5.714673612282175]
In real-world applications, unlabeled data always contain classes not present in the labeled set.
We propose an open-world SSL method for Self-learning Open-world Classes (SSOC), which can explicitly self-learn multiple unknown classes.
SSOC outperforms the state-of-the-art baselines on multiple popular classification benchmarks.
arXiv Detail & Related papers (2024-01-15T09:27:46Z)
- SegPrompt: Boosting Open-world Segmentation via Category-level Prompt Learning [49.17344010035996]
Open-world instance segmentation (OWIS) models detect unknown objects in a class-agnostic manner.
Previous OWIS approaches completely erase category information during training to keep the model's ability to generalize to unknown objects.
We propose a novel training mechanism termed SegPrompt that uses category information to improve the model's class-agnostic segmentation ability.
arXiv Detail & Related papers (2023-08-12T11:25:39Z)
- Towards Open-Domain Topic Classification [69.21234350688098]
We introduce an open-domain topic classification system that accepts user-defined taxonomy in real time.
Users will be able to classify a text snippet with respect to any candidate labels they want and get an instant response from our web interface.
arXiv Detail & Related papers (2023-06-29T20:25:28Z)
- Federated Semi-Supervised Learning with Annotation Heterogeneity [57.12560313403097]
We propose a novel framework called Heterogeneously Annotated Semi-Supervised LEarning (HASSLE)
It is a dual-model framework with two models trained separately on labeled and unlabeled data.
The dual models can implicitly learn from both types of data across different clients, although each dual model is only trained locally on a single type of data.
arXiv Detail & Related papers (2023-03-04T16:04:49Z)
- Navigating Alignment for Non-identical Client Class Sets: A Label Name-Anchored Federated Learning Framework [26.902679793955972]
FedAlign is a novel framework to align latent spaces across clients from both label and data perspectives.
From a label perspective, we leverage the expressive natural language class names as a common ground for label encoders to anchor class representations.
From a data perspective, we regard the global class representations as anchors and leverage the data points that are close/far enough to the anchors of locally-unaware classes to align the data encoders across clients.
arXiv Detail & Related papers (2023-01-01T23:17:30Z)
- Instance-based Label Smoothing For Better Calibrated Classification Networks [3.388509725285237]
Label smoothing is widely used in deep neural networks for multi-class classification.
We take inspiration from both label smoothing and self-distillation.
We propose two novel instance-based label smoothing approaches.
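For context, standard (uniform) label smoothing mixes the one-hot target with a uniform distribution over the K classes; the instance-based approaches proposed in that paper adapt the smoothing per example, which this minimal sketch does not attempt to reproduce.

```python
# Uniform label smoothing for reference: each target probability is pulled
# toward 1/K by a factor epsilon. The instance-based variants in the paper
# choose the smoothing per example instead of using one global epsilon.
def smooth_labels(one_hot, epsilon=0.1):
    k = len(one_hot)
    return [(1 - epsilon) * y + epsilon / k for y in one_hot]

print(smooth_labels([0.0, 1.0, 0.0, 0.0]))
# → [0.025, 0.925, 0.025, 0.025]
```

The smoothed target still sums to 1, so it remains a valid distribution for a cross-entropy loss.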
arXiv Detail & Related papers (2021-10-11T15:33:23Z)
- SemiFed: Semi-supervised Federated Learning with Consistency and Pseudo-Labeling [14.737638416823772]
Federated learning enables multiple clients, such as mobile phones and organizations, to collaboratively learn a shared model for prediction.
In this work, we focus on a new scenario for cross-silo federated learning, where data samples of each client are partially labeled.
We propose a new framework dubbed SemiFed that unifies two dominant approaches for semi-supervised learning: consistency regularization and pseudo-labeling.
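A minimal sketch of the pseudo-labeling half of that recipe (consistency regularization is omitted): an unlabeled sample is given its own predicted class as a training label only when the model's confidence clears a threshold. The function name and threshold value are illustrative assumptions, not SemiFed's exact procedure.

```python
# Confidence-thresholded pseudo-labeling: keep the model's own prediction
# as a label only when it is confident enough. Threshold is illustrative.
def pseudo_label(probs, threshold=0.95):
    """Return (class_index, True) when the prediction is confident enough
    to use as a pseudo-label, else (None, False)."""
    conf = max(probs)
    if conf >= threshold:
        return probs.index(conf), True
    return None, False

print(pseudo_label([0.02, 0.97, 0.01]))  # → (1, True)
print(pseudo_label([0.4, 0.35, 0.25]))   # → (None, False)
```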
arXiv Detail & Related papers (2021-08-21T01:14:27Z)
- Open-World Semi-Supervised Learning [66.90703597468377]
We introduce a new open-world semi-supervised learning setting in which the model is required to recognize previously seen classes and to discover novel classes never encountered in the labeled set.
We propose ORCA, an approach that learns to simultaneously classify and cluster the data.
We demonstrate that ORCA accurately discovers novel classes and assigns samples to previously seen classes on benchmark image classification datasets.
arXiv Detail & Related papers (2021-02-06T07:11:07Z)
- Federated Learning with Only Positive Labels [71.63836379169315]
We propose a generic framework for training with only positive labels, namely Federated Averaging with Spreadout (FedAwS)
We show, both theoretically and empirically, that FedAwS can almost match the performance of conventional learning where users have access to negative labels.
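The "spreadout" idea, keeping class embeddings from collapsing when clients only ever see positive labels, can be sketched as a hinge penalty on pairwise embedding distances applied at the server. The margin and the exact form of the regularizer here are assumptions for illustration, not the paper's definition.

```python
# Toy spreadout-style regularizer: penalize pairs of class embeddings that
# sit closer than a minimum margin. Margin and embeddings are made up.
def spreadout_penalty(class_embeddings, margin=0.5):
    """Sum of squared violations of a minimum pairwise Euclidean distance."""
    penalty = 0.0
    n = len(class_embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            d = sum((a - b) ** 2 for a, b in
                    zip(class_embeddings[i], class_embeddings[j])) ** 0.5
            penalty += max(0.0, margin - d) ** 2
    return penalty
```

Well-separated embeddings incur zero penalty (e.g. two unit-apart points with margin 0.5), while near-duplicates are pushed apart by the gradient of the hinge term.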
arXiv Detail & Related papers (2020-04-21T23:35:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.