Multimodal Federated Learning
- URL: http://arxiv.org/abs/2109.04833v1
- Date: Fri, 10 Sep 2021 12:32:46 GMT
- Title: Multimodal Federated Learning
- Authors: Yuchen Zhao, Payam Barnaghi, Hamed Haddadi
- Abstract summary: In many applications, such as smart homes with IoT devices, local data on clients are generated from different modalities.
Existing federated learning systems only work on local data from a single modality, which limits the scalability of the systems.
We propose a multimodal and semi-supervised federated learning framework that trains autoencoders to extract shared or correlated representations from different local data modalities on clients.
- Score: 9.081857621783811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is proposed as an alternative to centralized machine
learning since its client-server structure provides better privacy protection
and scalability in real-world applications. In many applications, such as smart
homes with IoT devices, local data on clients are generated from different
modalities such as sensory, visual, and audio data. Existing federated learning
systems only work on local data from a single modality, which limits the
scalability of the systems.
In this paper, we propose a multimodal and semi-supervised federated learning
framework that trains autoencoders to extract shared or correlated
representations from different local data modalities on clients. In addition,
we propose a multimodal FedAvg algorithm to aggregate local autoencoders
trained on different data modalities. We use the learned global autoencoder for
a downstream classification task with the help of auxiliary labelled data on
the server. We empirically evaluate our framework on different modalities
including sensory data, depth camera videos, and RGB camera videos. Our
experimental results demonstrate that introducing data from multiple modalities
into federated learning can improve its accuracy. In addition, we can use
labelled data from only one modality for supervised learning on the server and
apply the learned model to testing data from other modalities to achieve decent
accuracy (e.g., approximately 70% as the best performance), especially when
combining contributions from both unimodal clients and multimodal clients.
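To make the aggregation step described in the abstract concrete, here is a minimal sketch of a multimodal FedAvg-style update. It assumes clients report per-layer autoencoder weights keyed by name, with "shared." layers forming the shared representation and "<modality>." layers being modality-specific; the weighting by local sample counts follows standard FedAvg. All names and the layer layout are illustrative assumptions, not the paper's implementation.
```python
# Minimal sketch of a multimodal FedAvg-style aggregation step.
# Assumptions (not from the paper): each client reports per-layer weights as
# NumPy arrays keyed by layer name, plus its local sample count; layers whose
# names start with "shared." are common to all clients, while "<modality>."
# layers exist only on clients that hold that modality.
from typing import Dict, List, Tuple
import numpy as np

ClientUpdate = Tuple[Dict[str, np.ndarray], int]  # (layer weights, num local samples)


def multimodal_fedavg(client_updates: List[ClientUpdate]) -> Dict[str, np.ndarray]:
    """Average each layer over the clients that actually trained it,
    weighting by local sample counts (standard FedAvg weighting)."""
    totals: Dict[str, np.ndarray] = {}
    counts: Dict[str, float] = {}
    for params, n_samples in client_updates:
        for name, value in params.items():
            if name not in totals:
                totals[name] = np.zeros_like(value, dtype=np.float64)
                counts[name] = 0.0
            totals[name] += n_samples * value
            counts[name] += n_samples
    return {name: totals[name] / counts[name] for name in totals}


if __name__ == "__main__":
    # Two unimodal clients (sensory, depth) and one multimodal client.
    rng = np.random.default_rng(0)

    def random_layers(*names: str) -> Dict[str, np.ndarray]:
        return {name: rng.normal(size=(4, 4)) for name in names}

    sensory_client = (random_layers("shared.enc", "sensory.enc"), 100)
    depth_client = (random_layers("shared.enc", "depth.enc"), 80)
    multi_client = (random_layers("shared.enc", "sensory.enc", "depth.enc"), 120)
    global_model = multimodal_fedavg([sensory_client, depth_client, multi_client])
    print({name: w.shape for name, w in global_model.items()})
```
In this sketch, a modality-specific layer is simply averaged over the subset of clients that trained it, while shared layers are averaged over everyone, which is one straightforward way to combine unimodal and multimodal contributions.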
Related papers
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
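As a rough illustration of a client adjusting its own learning rate inside an AMSGrad-style local update, the sketch below gives each client its own base rate and decays it with the local step count. The decay rule and all hyperparameters are illustrative assumptions; FedLALR's actual auto-tuned schedule and convergence analysis are in the cited paper.
```python
# Illustrative client-side AMSGrad step with a per-client learning rate.
# The scheduling rule below (decaying each client's own base rate by its
# local step count) is an assumption for illustration only.
import numpy as np


class ClientAMSGrad:
    def __init__(self, dim: int, base_lr: float, beta1: float = 0.9, beta2: float = 0.999):
        self.base_lr = base_lr          # each client keeps its own base rate
        self.beta1, self.beta2 = beta1, beta2
        self.m = np.zeros(dim)          # first moment
        self.v = np.zeros(dim)          # second moment
        self.v_hat = np.zeros(dim)      # AMSGrad: running max of v
        self.t = 0

    def step(self, params: np.ndarray, grad: np.ndarray) -> np.ndarray:
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        lr_t = self.base_lr / np.sqrt(self.t)   # assumed client-specific decay
        return params - lr_t * self.m / (np.sqrt(self.v_hat) + 1e-8)


if __name__ == "__main__":
    # Two clients with different local rates on a toy quadratic objective.
    for client_lr in (0.1, 0.02):
        opt = ClientAMSGrad(dim=2, base_lr=client_lr)
        x = np.array([3.0, -2.0])
        for _ in range(200):
            x = opt.step(x, grad=2 * x)   # gradient of ||x||^2
        print(client_lr, np.round(x, 3))
```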
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
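The online-knowledge-distillation idea can be sketched as a contrastive loss that pulls a client's representation of a sample towards a peer's (e.g., server-aggregated) representation of the same sample. The InfoNCE-style form and the temperature below are assumptions for illustration, not the exact loss of the cited paper.
```python
# Minimal sketch of a contrastive distillation loss between a client's
# representations and peer representations of the same samples.
import numpy as np


def contrastive_distillation_loss(local_repr: np.ndarray,
                                  peer_repr: np.ndarray,
                                  temperature: float = 0.1) -> float:
    """InfoNCE-style loss: each local embedding should match the peer
    embedding of the same sample (positive) against all other samples
    (negatives)."""
    def l2_normalize(x: np.ndarray) -> np.ndarray:
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    z_local = l2_normalize(local_repr)
    z_peer = l2_normalize(peer_repr)
    logits = z_local @ z_peer.T / temperature          # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))         # positives on the diagonal


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    local = rng.normal(size=(16, 32))
    aligned_peer = local + 0.01 * rng.normal(size=(16, 32))   # well-aligned peers
    random_peer = rng.normal(size=(16, 32))
    print("aligned:", contrastive_distillation_loss(local, aligned_peer))
    print("random: ", contrastive_distillation_loss(local, random_peer))
```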
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Cross-domain Federated Object Detection [43.66352018668227]
Federated learning can enable multi-party collaborative learning without leaking client data.
We propose a cross-domain federated object detection framework, named FedOD.
arXiv Detail & Related papers (2022-06-30T03:09:59Z)
- DQRE-SCnet: A novel hybrid approach for selecting users in Federated Learning with Deep-Q-Reinforcement Learning based on Spectral Clustering [1.174402845822043]
Machine learning models based on sensitive data in the real-world promise advances in areas ranging from medical screening to disease outbreaks, agriculture, industry, defense science, and more.
In many applications, participants benefit from collecting their own private data sets, training detailed machine learning models on that real data, and sharing the benefits of using these models.
Due to existing privacy and security concerns, most people avoid sharing sensitive data for training. Federated learning allows various parties to jointly train a machine learning model without any user revealing their local data to a central server.
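Very loosely, the selection scheme named in this entry's title can be pictured as grouping clients and sampling within groups. The sketch below shows only the clustering half, grouping clients by the cosine similarity of their model updates (an assumed feature) and drawing one client per cluster; the Deep-Q-Reinforcement-Learning policy that actually drives the selection in the cited paper is not reproduced here.
```python
# Loose illustration of cluster-based client selection for federated learning.
# Assumptions: clients are grouped by cosine similarity of their last model
# updates (a stand-in feature), and one client is drawn per cluster.
import numpy as np
from sklearn.cluster import SpectralClustering


def select_clients(client_updates: np.ndarray, n_clusters: int, seed: int = 0) -> list:
    """client_updates: (num_clients, dim) matrix of flattened model updates."""
    normed = client_updates / np.linalg.norm(client_updates, axis=1, keepdims=True)
    affinity = np.clip(normed @ normed.T, 0.0, None)       # non-negative cosine similarity
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=seed).fit_predict(affinity)
    rng = np.random.default_rng(seed)
    return [int(rng.choice(np.where(labels == c)[0])) for c in range(n_clusters)]


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # 12 clients whose updates fall into 3 rough groups.
    groups = [rng.normal(loc=mu, size=(4, 10)) for mu in (0.0, 3.0, -3.0)]
    updates = np.vstack(groups)
    print(select_clients(updates, n_clusters=3))
```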
arXiv Detail & Related papers (2021-11-07T15:14:29Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
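A minimal sketch of the shared-representation / personal-head split on a toy linear model: the prediction is X @ B @ w with a shared low-dimensional representation B and a per-client head w. The many-head-steps-per-representation-step schedule follows the description above; the step sizes, update counts, and least-squares objective are illustrative assumptions.
```python
# Sketch of alternating local-head and shared-representation updates.
import numpy as np


def client_round(B: np.ndarray, w: np.ndarray, X: np.ndarray, y: np.ndarray,
                 head_steps: int = 20, lr: float = 0.05):
    """One client round: many gradient steps on the local head w,
    then one step on the shared representation B (prediction = X @ B @ w)."""
    for _ in range(head_steps):                       # local-only head updates
        resid = X @ B @ w - y
        w = w - lr * (B.T @ X.T @ resid) / len(y)
    resid = X @ B @ w - y
    grad_B = (X.T @ resid)[:, None] @ w[None, :] / len(y)
    return B - lr * grad_B, w                         # representation step + personal head


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    d, k, n_clients = 10, 2, 5
    B = rng.normal(size=(d, k)) * 0.1                 # shared representation
    heads = [rng.normal(size=k) * 0.1 for _ in range(n_clients)]
    data = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(n_clients)]
    for _ in range(10):                               # communication rounds
        B_updates = []
        for i, (X, y) in enumerate(data):
            B_i, heads[i] = client_round(B, heads[i], X, y)
            B_updates.append(B_i)
        B = np.mean(B_updates, axis=0)                # server averages representations
    print("shared B shape:", B.shape)
```
Only B travels to the server for averaging; each head w stays on its client, which is what makes the representation shared and the model personalized.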
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Decentralised Learning from Independent Multi-Domain Labels for Person Re-Identification [69.29602103582782]
Deep learning has been successful for many computer vision tasks due to the availability of shared and centralised large-scale training data.
However, increasing awareness of privacy concerns poses new challenges to deep learning, especially for person re-identification (Re-ID).
We propose a novel paradigm called Federated Person Re-Identification (FedReID) to construct a generalisable global model (a central server) by simultaneously learning with multiple privacy-preserved local models (local clients).
This client-server collaborative learning process is iteratively performed under privacy control, enabling FedReID to realise decentralised learning without sharing distributed local data or collecting any centralised data.
arXiv Detail & Related papers (2020-06-07T13:32:33Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
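The multi-center idea can be illustrated with a k-means-like alternation: assign each client model to its closest center and recompute each center as the mean of its assigned clients. The L2 matching criterion below is an assumed stand-in for the paper's actual matching objective, not its exact formulation.
```python
# Rough sketch of a multi-center aggregation step: match clients to centers,
# then update each center as the mean of its assigned client models.
import numpy as np


def multi_center_aggregate(client_models: np.ndarray, centers: np.ndarray):
    """client_models: (num_clients, dim); centers: (num_centers, dim)."""
    dists = np.linalg.norm(client_models[:, None, :] - centers[None, :, :], axis=2)
    assignment = dists.argmin(axis=1)                       # matching per client
    new_centers = centers.copy()
    for c in range(len(centers)):
        members = client_models[assignment == c]
        if len(members) > 0:                                # keep empty centers unchanged
            new_centers[c] = members.mean(axis=0)
    return new_centers, assignment


if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # Client models drawn from two non-IID groups.
    clients = np.vstack([rng.normal(0.0, 0.1, size=(6, 8)),
                         rng.normal(2.0, 0.1, size=(6, 8))])
    centers = rng.normal(size=(2, 8))
    for _ in range(5):
        centers, assignment = multi_center_aggregate(clients, centers)
    print("assignment:", assignment)
```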
arXiv Detail & Related papers (2020-05-03T09:14:31Z)