Federated Learning System without Model Sharing through Integration of
Dimensional Reduced Data Representations
- URL: http://arxiv.org/abs/2011.06803v1
- Date: Fri, 13 Nov 2020 08:12:00 GMT
- Authors: Anna Bogdanova, Akie Nakai, Yukihiko Okada, Akira Imakura, and Tetsuya
Sakurai
- Abstract summary: We explore an alternative federated learning system that enables integration of dimensionality reduced representations of distributed data prior to a supervised learning task.
We compare the performance of this approach on image classification tasks to three alternative frameworks: centralized machine learning, individual machine learning, and Federated Averaging.
Our results show that our approach achieves accuracy comparable to Federated Averaging and outperforms it in a small-user setting.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dimensionality Reduction is a commonly used element in a machine learning
pipeline that helps to extract important features from high-dimensional data.
In this work, we explore an alternative federated learning system that enables
integration of dimensionality reduced representations of distributed data prior
to a supervised learning task, thus avoiding model sharing among the parties.
We compare the performance of this approach on image classification tasks to
three alternative frameworks: centralized machine learning, individual machine
learning, and Federated Averaging, and analyze potential use cases for a
federated learning system without model sharing. Our results show that our
approach achieves accuracy comparable to Federated Averaging and outperforms
it in a small-user setting.
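The core idea of the abstract — each party reduces its own data locally and only the reduced representations are integrated centrally — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' exact algorithm: the toy data, the shared anchor dataset, the per-party PCA maps, and the least-squares alignment onto one party's anchor projection are all assumptions made for demonstration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setup: two parties hold disjoint samples of the same 20-dim task,
# with the label carried by the first two (high-variance) features.
def make_party(n):
    X = rng.normal(size=(n, 20))
    X[:, :2] *= 3.0
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

parties = [make_party(100), make_party(100)]

# Public anchor data shared by all parties; no raw private data or model
# parameters ever leave a party.
X_anchor = rng.normal(size=(50, 20))
X_anchor[:, :2] *= 3.0

dim = 5
reduced, anchors = [], []
for X, _ in parties:
    pca = PCA(n_components=dim).fit(X)       # local, private dimensionality reduction
    reduced.append(pca.transform(X))         # only reduced representations are shared
    anchors.append(pca.transform(X_anchor))  # anchor projected through the same map

# Server-side integration: align each party's reduced space with party 0's
# anchor projection via a least-squares linear transform.
target = anchors[0]
integrated = []
for Z, A in zip(reduced, anchors):
    G, *_ = np.linalg.lstsq(A, target, rcond=None)
    integrated.append(Z @ G)

# Supervised learning on the integrated representations.
X_train = np.vstack(integrated)
y_train = np.concatenate([y for _, y in parties])
clf = LogisticRegression().fit(X_train, y_train)
```

Because each party's PCA is fit privately and only low-dimensional projections (plus anchor projections) are exchanged, neither raw data nor model weights are shared, which is the use case the abstract contrasts with Federated Averaging.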
Related papers
- Interpretable Data Fusion for Distributed Learning: A Representative Approach via Gradient Matching [19.193379036629167]
This paper introduces a representative-based approach for distributed learning that transforms multiple raw data points into a virtual representation.
It achieves this by condensing extensive datasets into digestible formats, thus fostering intuitive human-machine interactions.
arXiv Detail & Related papers (2024-05-06T18:21:41Z) - Personalized Federated Learning via Sequential Layer Expansion in Representation Learning [0.0]
Federated learning ensures the privacy of clients by conducting distributed training on individual client devices and sharing only the model weights with a central server.
We propose a new representation learning-based approach that suggests decoupling the entire deep learning model into more densely divided parts with the application of suitable scheduling methods.
arXiv Detail & Related papers (2024-04-27T06:37:19Z) - Achieving Transparency in Distributed Machine Learning with Explainable
Data Collaboration [5.994347858883343]
A parallel trend has been to train machine learning models in collaboration with other data holders without accessing their data.
This paper presents an Explainable Data Collaboration Framework based on a model-agnostic additive feature attribution algorithm.
arXiv Detail & Related papers (2022-12-06T23:53:41Z) - Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
arXiv Detail & Related papers (2022-02-17T02:01:37Z) - Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z) - Clustering augmented Self-Supervised Learning: An application to Land
Cover Mapping [10.720852987343896]
We introduce a new method for land cover mapping by using a clustering based pretext task for self-supervised learning.
We demonstrate the effectiveness of the method on two societally relevant applications.
arXiv Detail & Related papers (2021-08-16T19:35:43Z) - Multi-Pretext Attention Network for Few-shot Learning with
Self-supervision [37.6064643502453]
We propose a novel augmentation-free method for self-supervised learning, which does not rely on any auxiliary sample.
Besides, we propose the Multi-pretext Attention Network (MAN), which exploits a specific attention mechanism to combine traditional augmentation-reliant methods and our GC.
We evaluate our MAN extensively on miniImageNet and tieredImageNet datasets and the results demonstrate that the proposed method outperforms the state-of-the-art (SOTA) relevant methods.
arXiv Detail & Related papers (2021-03-10T10:48:37Z) - Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z) - Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z) - Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z) - Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.