Partial Federated Learning
- URL: http://arxiv.org/abs/2403.01615v1
- Date: Sun, 3 Mar 2024 21:04:36 GMT
- Title: Partial Federated Learning
- Authors: Tiantian Feng, Anil Ramakrishna, Jimit Majmudar, Charith Peris, Jixuan
Wang, Clement Chung, Richard Zemel, Morteza Ziyadi, Rahul Gupta
- Abstract summary: Federated Learning (FL) is a popular algorithm to train machine learning models on user data constrained to edge devices.
We propose a new algorithm called Partial Federated Learning (PartialFL), where a machine learning model is trained using data where a subset of data modalities can be made available to the server.
- Score: 26.357723187375665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a popular algorithm to train machine learning
models on user data constrained to edge devices (for example, mobile phones)
due to privacy concerns. Typically, FL is trained with the assumption that no
part of the user data can be egressed from the edge. However, in many
production settings, specific data modalities or metadata must remain on
device while others may leave it. For example, in commercial SLU systems, it is
typically desired to prevent transmission of biometric signals (such as audio
recordings of the input prompt) to the cloud, but egress of locally (i.e. on
the edge device) transcribed text to the cloud may be possible. In this work,
we propose a new algorithm called Partial Federated Learning (PartialFL), where
a machine learning model is trained using data where a subset of data
modalities or their intermediate representations can be made available to the
server. We further restrict model training by preventing the egress of data
labels to the cloud for better privacy, and instead use a contrastive-learning-based
training objective. We evaluate our approach on two different multi-modal
datasets and show promising results.
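The contrastive objective is not specified in detail in this summary. As an illustration only, a symmetric InfoNCE-style loss over paired modality embeddings (a common choice for contrastive training, assumed here rather than taken from the paper) could be sketched as follows, where one modality's representations are shareable with the server and the other stays on device:

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss over paired embeddings.

    z_a: on-device (e.g. audio) embeddings, shape (n, d)
    z_b: server-shareable (e.g. text) embeddings, shape (n, d)
    Matching rows are positives; all other pairs are negatives.
    """
    # L2-normalize so dot products become cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature       # (n, n) similarity matrix
    diag = np.arange(len(z_a))               # positives on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[diag, diag].mean()

    # cross-entropy in both directions (a->b and b->a), averaged
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing such a loss pulls paired cross-modal representations together without exchanging class labels, which is consistent with the label-egress restriction described above.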
Related papers
- Stalactite: Toolbox for Fast Prototyping of Vertical Federated Learning Systems [37.11550251825938]
We present Stalactite, an open-source framework for Vertical Federated Learning (VFL) systems.
VFL is a type of FL where data samples are divided by features across several data owners.
We demonstrate its use on a real-world recommendation dataset.
arXiv Detail & Related papers (2024-09-23T21:29:03Z)
- Can Public Large Language Models Help Private Cross-device Federated Learning? [58.05449579773249]
We study (differentially) private federated learning (FL) of language models.
Public data has been used to improve privacy-utility trade-offs for both large and small language models.
We propose a novel distribution matching algorithm with theoretical grounding to sample public data close to private data distribution.
arXiv Detail & Related papers (2023-05-20T07:55:58Z)
- Federated Nearest Neighbor Machine Translation [66.8765098651988]
In this paper, we propose a novel federated nearest neighbor (FedNN) machine translation framework.
FedNN leverages one-round memorization-based interaction to share knowledge across different clients.
Experiments show that FedNN significantly reduces computational and communication costs compared with FedAvg.
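The summary above does not detail FedNN's memorization mechanism. As a rough, hypothetical illustration of the retrieval step used in nearest-neighbor machine translation (the family FedNN belongs to), a lookup over a memorized datastore of (key, value) pairs might be sketched as:

```python
import numpy as np

def knn_lookup(query, keys, values, k=2):
    """Return the values of the k nearest keys to the query (L2 distance).

    keys: memorized representation vectors, shape (n, d)
    values: the items (e.g. target tokens) associated with each key
    """
    distances = np.linalg.norm(keys - query, axis=1)
    nearest = np.argsort(distances)[:k]
    return [values[i] for i in nearest]
```

In a federated setting, such a datastore could be populated once from each client's shared representations (function and parameter names here are illustrative, not from the paper).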
arXiv Detail & Related papers (2023-02-23T18:04:07Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
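The FL aggregation step described above (each client releases a locally trained model to a central server) is commonly instantiated as FedAvg, a sample-size-weighted parameter average. A minimal sketch, with illustrative names not taken from any of the papers listed here:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation).

    client_weights: list of parameter dicts, one per client
    client_sizes: number of local training samples per client (weighting factor)
    """
    total = sum(client_sizes)
    return {
        name: sum(w[name] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }
```

Clients with more local data contribute proportionally more to the aggregated model; in SL, by contrast, it is the cut-layer activations rather than the parameters that cross the client-server boundary.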
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Split Federated Learning on Micro-controllers: A Keyword Spotting Showcase [1.4794135558227681]
Federated Learning is proposed as a private learning scheme, in which users train the model locally without sending their raw data to servers.
In this work, we implement a simple SFL framework on an Arduino board and verify its correctness on a Chinese spoken-digits dataset for a keyword spotting application, achieving over 90% accuracy.
On the English digits audio dataset, our SFL implementation achieves 13.89% higher accuracy compared to a state-of-the-art FL implementation.
arXiv Detail & Related papers (2022-10-04T23:42:45Z)
- Federated Split GANs [12.007429155505767]
We propose an alternative approach to train ML models on users' devices themselves.
We focus on GANs (generative adversarial networks) and leverage their inherent privacy-preserving attribute.
Our system preserves data privacy, keeps training time short, and matches the accuracy of model training on unconstrained devices.
arXiv Detail & Related papers (2022-07-04T23:53:47Z)
- Semi-FedSER: Semi-supervised Learning for Speech Emotion Recognition On Federated Learning using Multiview Pseudo-Labeling [43.17379040854574]
Speech Emotion Recognition (SER) applications are frequently associated with privacy concerns.
Federated learning (FL) is a distributed machine learning algorithm that coordinates clients to train a model collaboratively without sharing local data.
In this work, we propose a semi-supervised federated learning framework, Semi-FedSER, that utilizes both labeled and unlabeled data samples to address the challenge of limited data samples in FL.
arXiv Detail & Related papers (2022-03-15T21:50:43Z)
- Federated Learning-based Active Authentication on Mobile Devices [98.23904302910022]
User active authentication on mobile devices aims to learn a model that can correctly recognize the enrolled user based on device sensor information.
We propose a novel user active authentication training method, termed Federated Active Authentication (FAA).
We show that existing FL/SL methods are suboptimal for FAA as they rely on the data to be distributed homogeneously.
arXiv Detail & Related papers (2021-04-14T22:59:08Z)
- Federated Learning of User Authentication Models [69.93965074814292]
We propose Federated User Authentication (FedUA), a framework for privacy-preserving training of machine learning models.
FedUA adopts the federated learning framework to enable a group of users to jointly train a model without sharing the raw inputs.
We show our method is privacy-preserving, scalable with the number of users, and allows new users to be added to training without changing the output layer.
arXiv Detail & Related papers (2020-07-09T08:04:38Z)
- An On-Device Federated Learning Approach for Cooperative Model Update between Edge Devices [2.99321624683618]
A neural-network-based on-device learning approach was recently proposed, in which edge devices train on incoming data at runtime to update their models.
In this paper, we focus on OS-ELM to sequentially train a model based on recent samples and combine it with autoencoder for anomaly detection.
We extend it for an on-device federated learning so that edge devices can exchange their trained results and update their model by using those collected from the other edge devices.
arXiv Detail & Related papers (2020-02-27T18:15:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.