Federated Self-Supervised Learning for Acoustic Event Classification
- URL: http://arxiv.org/abs/2203.11997v1
- Date: Tue, 22 Mar 2022 18:49:52 GMT
- Title: Federated Self-Supervised Learning for Acoustic Event Classification
- Authors: Meng Feng, Chieh-Chi Kao, Qingming Tang, Ming Sun, Viktor Rozgic,
Spyros Matsoukas, Chao Wang
- Abstract summary: Federated learning (FL) is a framework that decouples data collection and model training to enhance customer privacy.
We adapt self-supervised learning to the FL framework for on-device continual learning of representations.
Compared to the baseline without FL, the proposed method improves precision by up to 20.3% relative while maintaining recall.
- Score: 23.27204234096171
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Standard acoustic event classification (AEC) solutions require large-scale
collection of data from client devices for model optimization. Federated
learning (FL) is a compelling framework that decouples data collection and
model training to enhance customer privacy. In this work, we investigate the
feasibility of applying FL to improve AEC performance while no customer data
can be directly uploaded to the server. We assume no pseudo labels can be
inferred from on-device user inputs, aligning with the typical use cases of
AEC. We adapt self-supervised learning to the FL framework for on-device
continual learning of representations, and it results in improved performance
of the downstream AEC classifiers without labeled/pseudo-labeled data
available. Compared to the baseline without FL, the proposed method improves
precision by up to 20.3% relative while maintaining recall. Our work
differs from prior work in FL in that our approach does not require
user-generated learning targets, and the data we use is collected from our Beta
program and is de-identified, to closely simulate production settings.
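No reference code accompanies this digest, so the following is a minimal sketch of the training pattern the abstract describes: each device refines representations with a label-free contrastive objective, and the server averages the resulting weights. The SimCLR-style NT-Xent loss and the helper names (nt_xent_loss, local_ssl_update, fedavg) are illustrative assumptions, not the paper's exact recipe.

import copy
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """SimCLR-style contrastive loss between embeddings of two views (an assumption)."""
    n = z1.size(0)
    z = torch.cat([F.normalize(z1, dim=1), F.normalize(z2, dim=1)])
    sim = (z @ z.t()) / temperature                                  # (2n, 2n) similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])   # positive = paired view
    return F.cross_entropy(sim, targets)

def local_ssl_update(global_state, model, views, lr=1e-3):
    """On-device step: learn from two augmented views of unlabeled audio."""
    model.load_state_dict(global_state)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss = nt_xent_loss(model(views[0]), model(views[1]))
    opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

def fedavg(client_states):
    """Server step: average client weights; raw audio never leaves the devices."""
    avg = copy.deepcopy(client_states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in client_states]).mean(0)
    return avg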
Related papers
- Lightweight Industrial Cohorted Federated Learning for Heterogeneous Assets [0.0]
Federated Learning (FL) is the most widely adopted collaborative learning approach for training decentralized Machine Learning (ML) models.
However, because FL tasks typically take a high degree of data similarity or homogeneity across clients for granted, FL is not specifically designed for industrial settings.
We propose a Lightweight Industrial Cohorted FL (LICFL) algorithm that uses model parameters for cohorting without any additional on-edge (client-level) computations and communications.
arXiv Detail & Related papers (2024-07-25T12:48:56Z)
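A minimal sketch of the cohorting idea in the LICFL entry above, assuming clients' model parameters are available server-side as numpy arrays; KMeans and the cohort count are illustrative stand-ins, not the paper's algorithm.

import numpy as np
from sklearn.cluster import KMeans

def flatten_params(state_dict):
    """One feature vector per client, built only from uploaded weights."""
    return np.concatenate([np.asarray(w).ravel() for w in state_dict.values()])

def cohort_clients(client_states, n_cohorts=3, seed=0):
    """Group clients by parameter similarity; no extra client-side work needed."""
    X = np.stack([flatten_params(s) for s in client_states])
    labels = KMeans(n_clusters=n_cohorts, n_init=10, random_state=seed).fit_predict(X)
    return {c: np.flatnonzero(labels == c).tolist() for c in range(n_cohorts)}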
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
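For contrast with FedAF's aggregation-free design, here is a sketch of the traditional aggregate-then-adapt server step the entry describes: a dataset-size-weighted average of client parameters that clients then adapt from. Helper names are illustrative.

def weighted_aggregate(client_weights, client_sizes):
    """Server step: dataset-size-weighted average of client parameters."""
    total = float(sum(client_sizes))
    return {key: sum(w[key] * (n / total) for w, n in zip(client_weights, client_sizes))
            for key in client_weights[0]}

# Each round repeats: broadcast the aggregate, let clients adapt locally,
# re-aggregate -- the loop that FedAF replaces with an aggregation-free design.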
- REPA: Client Clustering without Training and Data Labels for Improved Federated Learning in Non-IID Settings [1.69188400758521]
We present REPA, an approach to client clustering in non-IID FL settings that requires neither training nor labeled data collection.
REPA uses a novel supervised autoencoder-based method to create embeddings that profile a client's underlying data-generating processes without exposing the data to the server and without requiring local training.
arXiv Detail & Related papers (2023-09-25T12:30:43Z)
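One plausible reading of REPA's profiling step, sketched below: a server-provided encoder summarizes each client's data into a single embedding, with no labels and no local training. The encoder, mean pooling, and function name are assumptions.

import torch

@torch.no_grad()
def client_profile(encoder, local_batch):
    """Summarize a client's data as one embedding; no labels, no training."""
    encoder.eval()
    return encoder(local_batch).mean(dim=0)

# The server can then cluster these profile vectors (e.g., with KMeans)
# to group clients with similar data-generating processes.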
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
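A hedged sketch of the mixture-model idea in the FedGMM entry: fit a Gaussian mixture to a client's inputs, then score new samples by likelihood for novel sample detection. sklearn's centralized EM stands in for the paper's federated procedure.

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_client_gmm(X, n_components=5, seed=0):
    """Fit a mixture to one client's (n_samples, n_features) inputs."""
    return GaussianMixture(n_components=n_components, random_state=seed).fit(X)

def novelty_scores(gmm, X_new):
    """Lower log-likelihood under the fitted mixture -> more novel/uncertain."""
    return -gmm.score_samples(X_new)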
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
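A hedged sketch of the contrastive distillation objective the entry above describes: each client pulls its embedding of a sample toward a peer's embedding of the same sample and away from the peer's embeddings of other samples. Treating same-sample pairs as positives is an illustrative assumption.

import torch
import torch.nn.functional as F

def contrastive_distill_loss(student_z, peer_z, temperature=0.1):
    """Distill from shared representations, never from raw data or full models."""
    s = F.normalize(student_z, dim=1)
    p = F.normalize(peer_z, dim=1).detach()   # peer embeddings are fixed targets
    logits = (s @ p.t()) / temperature
    targets = torch.arange(s.size(0))         # positive = same sample at the peer
    return F.cross_entropy(logits, targets)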
- ON-DEMAND-FL: A Dynamic and Efficient Multi-Criteria Federated Learning Client Deployment Scheme [37.099990745974196]
We introduce On-Demand-FL, a client deployment approach for federated learning.
We make use of containerization technology such as Docker to build efficient environments.
A genetic algorithm (GA) is used to solve the multi-objective optimization problem.
arXiv Detail & Related papers (2022-11-05T13:41:19Z)
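A toy version of the GA step in the On-Demand-FL entry (selection plus mutation only; crossover omitted for brevity): deployment plans are binary masks over candidate clients, scored by a made-up fitness that trades data volume against energy cost. All numbers and the fitness function are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_clients = 20
data_value = rng.random(n_clients)       # hypothetical per-client data volume
energy_cost = rng.random(n_clients)      # hypothetical per-client energy cost

def fitness(mask):
    # Stand-in objective: maximize usable data, penalize energy spend.
    return data_value @ mask - 0.5 * (energy_cost @ mask)

pop = rng.integers(0, 2, size=(30, n_clients))   # 30 candidate deployment plans
for _ in range(50):                              # generations
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[-10:]]        # keep the 10 fittest plans
    children = elite[rng.integers(0, 10, size=30)].copy()
    children ^= (rng.random(children.shape) < 0.05).astype(children.dtype)  # mutate
    pop = children

best_plan = pop[np.argmax([fitness(m) for m in pop])]   # 1s mark clients to deploy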
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Labeling Chaos to Learning Harmony: Federated Learning with Noisy Labels [3.4620497416430456]
Federated Learning (FL) is a distributed machine learning paradigm that enables learning models from decentralized private datasets.
We propose FedLN, a framework to deal with label noise across different FL training stages.
Our evaluation on various publicly available vision and audio datasets demonstrates a 22% improvement on average compared to other existing methods for a label noise level of 60%.
arXiv Detail & Related papers (2022-08-19T14:47:40Z)
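A hedged stand-in for the noise estimation in the FedLN entry: flag a client's labels as suspect when they disagree with their nearest neighbours in a pretrained embedding space. The kNN rule is illustrative; the paper's estimators may differ.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def estimated_noise_level(embeddings, labels, k=10):
    """Fraction of a client's labels that disagree with their embedding neighbours."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(embeddings, labels)
    neighbour_vote = knn.predict(embeddings)
    return float(np.mean(neighbour_vote != labels))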
- Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients [98.22390453672499]
Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data.
We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients.
arXiv Detail & Related papers (2022-04-07T09:12:00Z)
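A minimal sketch of the surrogate-labeling idea in the FedUL entry: each client turns its unlabeled sets into an ordinary supervised task by using the index of the set a sample came from as its surrogate label. Treating each set as one source is the construction assumed here.

import numpy as np

def make_surrogate_dataset(unlabeled_sets):
    """unlabeled_sets: list of arrays, one per unlabeled data source on this client."""
    X = np.concatenate(unlabeled_sets)
    y_surrogate = np.concatenate([np.full(len(s), i)
                                  for i, s in enumerate(unlabeled_sets)])
    return X, y_surrogate   # train a standard classifier on (X, y_surrogate)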
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To make training on the enlarged dataset tractable, we apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
- FOCUS: Dealing with Label Quality Disparity in Federated Learning [25.650278226178298]
We propose Federated Opportunistic Computing for Ubiquitous Systems (FOCUS) to address this challenge.
FOCUS quantifies the credibility of each client's local data without directly observing it.
It effectively identifies clients with noisy labels and reduces their impact on the model performance.
arXiv Detail & Related papers (2020-01-29T09:31:01Z)
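A hedged sketch of the credibility score in the FOCUS entry, assuming the server holds a small clean benchmark set: combine the client model's loss on that benchmark with the server model's loss on the client's data (the latter computed on-device, so the data itself is never observed), then map higher loss to lower credibility. The exact combination and the exp(-loss) mapping are assumptions.

import torch
import torch.nn.functional as F

@torch.no_grad()
def credibility(client_model, server_model, bench_x, bench_y,
                local_x, local_y, alpha=0.5):
    """Mutual cross-entropy credibility; the local term is computed on-device."""
    e_bench = F.cross_entropy(client_model(bench_x), bench_y)  # client model vs. clean benchmark
    e_local = F.cross_entropy(server_model(local_x), local_y)  # server model vs. client labels
    mutual_ce = alpha * e_bench + (1.0 - alpha) * e_local
    return torch.exp(-mutual_ce)   # higher loss -> lower aggregation weight

# Aggregation can then weight each client's update by its credibility score,
# reducing the impact of clients with noisy labels.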