Gradient Assisted Learning
- URL: http://arxiv.org/abs/2106.01425v1
- Date: Wed, 2 Jun 2021 19:12:03 GMT
- Title: Gradient Assisted Learning
- Authors: Enmao Diao, Jie Ding, Vahid Tarokh
- Abstract summary: We propose a new method for various entities to assist each other in supervised learning tasks without sharing data, models, and objective functions.
In this framework, all participants collaboratively optimize the aggregate of local loss functions, and each participant autonomously builds its own model.
Experimental studies demonstrate that Gradient Assisted Learning can achieve performance close to centralized learning when all data, models, and objective functions are fully disclosed.
- Score: 34.24028216079336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In distributed settings, collaborations between different entities, such as
financial institutions, medical centers, and retail markets, are crucial to
providing improved service and performance. However, the underlying entities
may have little interest in sharing their private data, proprietary models, and
objective functions. These privacy requirements have created new challenges for
collaboration. In this work, we propose Gradient Assisted Learning (GAL), a new
method for various entities to assist each other in supervised learning tasks
without sharing data, models, and objective functions. In this framework, all
participants collaboratively optimize the aggregate of local loss functions,
and each participant autonomously builds its own model by iteratively fitting
the gradients of the objective function. Experimental studies demonstrate that
Gradient Assisted Learning can achieve performance close to centralized
learning when all data, models, and objective functions are fully disclosed.
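The mechanism the abstract describes, each participant iteratively fitting its own model to the gradients of the shared objective, resembles a gradient-boosting loop run across organizations. The sketch below is illustrative only, not the paper's implementation: it assumes a single task owner holding the labels, a squared-error loss (so the gradients reduce to residuals), scikit-learn decision trees as each participant's private local model, and simple averaged aggregation with a fixed learning rate; all names and hyperparameters are placeholders.

```python
# Illustrative sketch of a gradient-assisted loop (not the paper's code).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_assisted_fit(local_features, y, rounds=10, lr=0.1):
    """local_features: list of per-participant feature matrices (never pooled).
    y: labels held only by the task owner."""
    prediction = np.zeros(y.shape[0])            # shared aggregate output
    local_models = [[] for _ in local_features]  # each participant's private models

    for _ in range(rounds):
        # Task owner computes the gradient of its loss w.r.t. the current
        # aggregate prediction; for squared error this is just the residual.
        residuals = y - prediction

        round_outputs = []
        for k, X_k in enumerate(local_features):
            # Each participant fits a private model to the broadcast residuals
            # using only its own features; the model itself is never shared.
            model = DecisionTreeRegressor(max_depth=3).fit(X_k, residuals)
            local_models[k].append(model)
            round_outputs.append(model.predict(X_k))

        # Only the per-round predictions are exchanged and aggregated.
        prediction += lr * np.mean(round_outputs, axis=0)

    return local_models, prediction
```

In this sketch only residuals and per-round predictions cross organizational boundaries; raw features, local models, and the task owner's loss stay private, which is the property the abstract emphasizes.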
Related papers
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- Task-Agnostic Federated Learning [4.041327615026293]
This study addresses task-agnostic learning and generalization to unseen tasks by adapting a self-supervised FL framework.
Utilizing a Vision Transformer (ViT) as the consensus feature encoder for self-supervised pre-training, with no initial labels required, the framework enables effective representation learning across diverse datasets and tasks.
arXiv Detail & Related papers (2024-06-25T02:53:37Z)
- Distributed Continual Learning [12.18012293738896]
We introduce a mathematical framework capturing the essential aspects of distributed continual learning.
We identify three modes of information exchange: data instances, full model parameters, and modular (partial) model parameters.
A key finding is that sharing parameters becomes more efficient than sharing data as tasks grow more complex.
arXiv Detail & Related papers (2024-05-23T21:24:26Z)
- Collaborative Active Learning in Conditional Trust Environment [1.3846014191157405]
We investigate collaborative active learning, a paradigm in which multiple collaborators explore a new domain by leveraging their combined machine learning capabilities without disclosing their existing data and models.
This collaboration offers several advantages: (a) it addresses privacy and security concerns by eliminating the need for direct model and data disclosure; (b) it enables the use of different data sources and insights without direct data exchange; and (c) it promotes cost-effectiveness and resource efficiency through shared labeling costs.
arXiv Detail & Related papers (2024-03-27T10:40:27Z)
- Functional Knowledge Transfer with Self-supervised Representation Learning [11.566644244783305]
This work investigates the previously unexplored use of self-supervised representation learning for functional knowledge transfer.
Functional knowledge transfer is achieved by jointly optimizing a self-supervised pseudo task and a supervised learning task.
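The joint optimization mentioned above can be pictured as a single shared encoder trained with two heads and a summed loss. The sketch below is only an illustration of that idea, not the paper's setup: the rotation-style pseudo task, the cross-entropy losses, and the ssl_weight factor are all assumptions.

```python
# Illustrative joint optimization of a supervised task and an SSL pseudo task.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
sup_head = nn.Linear(128, 10)   # supervised head (e.g. 10-class labels)
ssl_head = nn.Linear(128, 4)    # pseudo-task head (e.g. 4 rotation classes)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(sup_head.parameters()) + list(ssl_head.parameters()),
    lr=1e-3,
)
criterion = nn.CrossEntropyLoss()

def joint_step(x, y, x_ssl, y_ssl, ssl_weight=1.0):
    # Both objectives share the encoder and are optimized in one backward pass.
    loss = criterion(sup_head(encoder(x)), y) + \
           ssl_weight * criterion(ssl_head(encoder(x_ssl)), y_ssl)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```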
arXiv Detail & Related papers (2023-03-12T21:14:59Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Federated Self-supervised Learning for Heterogeneous Clients [20.33482170846688]
We propose a unified and systematic framework, Heterogeneous Self-supervised Federated Learning (Hetero-SSFL), for enabling self-supervised learning with federation on heterogeneous clients.
The proposed framework allows representation learning across all the clients without imposing architectural constraints or requiring the presence of labeled data.
We empirically demonstrate that our proposed approach outperforms state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2022-05-25T05:07:44Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates of the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
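As an illustration of the alternating scheme described above (not the paper's published algorithm), one round can be sketched with linear models and a squared loss: each client takes several cheap updates of its low-dimensional head while the shared representation is frozen, then the server averages the clients' gradients to update the representation. The function name, loss, and step sizes are assumptions.

```python
# Illustrative alternating update of a shared representation and local heads.
import numpy as np

def shared_rep_round(client_data, B, heads, head_steps=5, lr=0.05):
    """client_data: list of (X, y) pairs; B: shared representation (d x k);
    heads[i]: client i's low-dimensional head (k,). Sketch only."""
    rep_grads = []
    for i, (X, y) in enumerate(client_data):
        w = heads[i]
        # Many inexpensive local updates of the low-dimensional head
        # while the shared representation B stays fixed.
        for _ in range(head_steps):
            residual = X @ B @ w - y
            w = w - lr * (B.T @ (X.T @ residual)) / len(y)
        heads[i] = w
        # Client i's gradient contribution for the shared representation.
        residual = X @ B @ w - y
        rep_grads.append(np.outer(X.T @ residual, w) / len(y))
    # Server averages the clients' gradients and takes one step on B.
    B = B - lr * np.mean(rep_grads, axis=0)
    return B, heads
```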
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.