A decentralized aggregation mechanism for training deep learning models
using smart contract system for bank loan prediction
- URL: http://arxiv.org/abs/2011.10981v1
- Date: Sun, 22 Nov 2020 10:47:45 GMT
- Title: A decentralized aggregation mechanism for training deep learning models
using smart contract system for bank loan prediction
- Authors: Pratik Ratadiya, Khushi Asawa, Omkar Nikhal
- Abstract summary: We present a solution for training deep learning architectures on distributed data by making use of a smart contract system.
We propose a mechanism that aggregates the intermediate representations obtained from local ANN models over a blockchain.
The obtained performance is better than that of the individual nodes and on par with that of a centralized data setup.
- Score: 0.1933681537640272
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Data privacy and sharing have always been critical issues when
building complex deep learning-based systems to model data. Facilitating a
decentralized approach that can benefit from data spread across multiple nodes,
without physically merging their data contents, has been an area of active
research. In this paper, we present a solution for training deep learning
architectures on distributed data by making use of a smart contract system.
Specifically, we propose a mechanism that aggregates, over a blockchain, the
intermediate representations obtained from local ANN models. Each local model
is trained on its own node's data. The intermediate representations derived
from these models, when combined and trained together on the host node, yield
a more accurate system. While federated learning primarily deals with data
that share the same features, with the samples distributed across multiple
nodes, here we deal with the same set of samples whose features are
distributed across multiple nodes. We consider the task of bank loan
prediction, wherein the personal details of an individual and their
bank-specific details may not be available in the same place. Our aggregation
mechanism makes it possible to train a model on such existing distributed data
without having to share and concatenate the actual data values. The obtained
performance, which is better than that of the individual nodes and on par with
that of a centralized data setup, makes a strong case for extending our
technique to other architectures and tasks. The solution finds its application
in organizations that want to train deep learning models on vertically
partitioned data.
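To make the proposed flow concrete, here is a minimal sketch in Python/PyTorch (ours, not the authors' code): two nodes hold different feature columns for the same loan applicants, each trains a local ANN on its own columns, and the host trains an aggregate model on the concatenated intermediate representations. Feature widths, layer sizes, and label availability at each node are illustrative assumptions; the blockchain/smart-contract transport layer is abstracted away.

```python
# Vertical-split sketch: raw feature columns stay at their owner nodes;
# only intermediate representations are aggregated at the host.
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 512
x_a = torch.randn(n, 6)   # node A: personal features (hypothetical width)
x_b = torch.randn(n, 4)   # node B: bank-specific features (hypothetical)
y = torch.randint(0, 2, (n,)).float()  # loan outcome labels (assumed shared)

class LocalNet(nn.Module):
    # Local ANN; the hidden layer doubles as the shared representation.
    def __init__(self, in_dim, rep_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, rep_dim), nn.ReLU())
        self.head = nn.Linear(rep_dim, 1)  # used only for local training
    def forward(self, x):
        return self.head(self.encoder(x))

def fit(model, x, y, epochs=100):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(-1), y)
        loss.backward()
        opt.step()

# 1) Each node trains its local ANN on its own feature columns.
net_a, net_b = LocalNet(6), LocalNet(4)
fit(net_a, x_a, y)
fit(net_b, x_b, y)

# 2) Nodes publish only intermediate representations (in the paper, via
#    the smart contract); raw feature values never leave their owners.
with torch.no_grad():
    rep = torch.cat([net_a.encoder(x_a), net_b.encoder(x_b)], dim=1)

# 3) The host trains an aggregate model on the combined representations.
host = nn.Sequential(nn.Linear(rep.shape[1], 16), nn.ReLU(), nn.Linear(16, 1))
fit(host, rep, y)
```

The key property illustrated: only the low-dimensional intermediate representations cross node boundaries; the raw feature values never do.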
Related papers
- Distributed Learning over Networks with Graph-Attention-Based
Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as a node in a graph and its node-specific parameters as that node's features, the benefits of the graph attention mechanism can be inherited.
arXiv Detail & Related papers (2023-05-22T13:48:30Z)
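As a rough illustration of the two-part personalization described in the entry above, the sketch below combines a shared global parameter block with an attention-weighted mix of node-specific blocks. The dot-product attention rule and all names are our assumptions, not the GATTA implementation.

```python
# Sketch (not the GATTA code): each agent's personalized weights are a
# shared global part plus an attention-weighted mix of node-specific parts,
# treating node-specific parameters as the node features of a graph.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 10
global_part = rng.normal(size=dim)             # shared across all agents
node_parts = rng.normal(size=(n_agents, dim))  # one block per agent

def personalized_weights(i, adjacency):
    # Attend over agent i's neighborhood (self-loops included), scoring
    # neighbors by dot-product similarity of node-specific parameters.
    neighbors = np.flatnonzero(adjacency[i])
    scores = node_parts[neighbors] @ node_parts[i]
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                       # softmax attention weights
    mixed = alpha @ node_parts[neighbors]      # weighted neighbor mix
    return global_part + mixed                 # global + node-specific

adjacency = np.ones((n_agents, n_agents))      # fully connected, self-loops
w0 = personalized_weights(0, adjacency)
```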
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
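The smashed-data round trip that SL relies on (described in the entry above) fits in a few lines of PyTorch. This is a generic split-learning sketch with arbitrary layer sizes, not the paper's contrastive-distillation method:

```python
# One split-learning training round: the client sends only cut-layer
# activations ("smashed data"); the server returns the gradient at the cut.
import torch
import torch.nn as nn

client = nn.Sequential(nn.Linear(20, 8), nn.ReLU())  # lower layers, on device
server = nn.Sequential(nn.Linear(8, 1))              # upper layers, at server
opt_c = torch.optim.SGD(client.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server.parameters(), lr=0.1)

x = torch.randn(32, 20)                    # private client features
y = torch.randint(0, 2, (32, 1)).float()   # labels (assumed server-side here)

smashed = client(x)                        # client forward to the cut layer
sent = smashed.detach().requires_grad_()   # only this crosses the wire
loss = nn.BCEWithLogitsLoss()(server(sent), y)

opt_s.zero_grad()
opt_c.zero_grad()
loss.backward()                            # server backprop down to the cut
smashed.backward(sent.grad)                # client resumes with returned grad
opt_s.step()
opt_c.step()
```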
- Decentralized Training of Foundation Models in Heterogeneous Environments [77.47261769795992]
Training foundation models, such as GPT-3 and PaLM, can be extremely expensive.
We present the first study of training large foundation models with model parallelism in a decentralized regime over a heterogeneous network.
arXiv Detail & Related papers (2022-06-02T20:19:51Z)
- Implicit Model Specialization through DAG-based Decentralized Federated Learning [0.0]
Federated learning allows a group of distributed clients to train a common machine learning model on private data.
We propose a unified approach to decentralization and personalization in federated learning.
Our evaluation shows that the specialization of models emerges directly from the DAG-based communication of model updates.
arXiv Detail & Related papers (2021-11-01T20:55:47Z)
- A communication efficient distributed learning framework for smart environments [0.4898659895355355]
This paper proposes a distributed learning framework to move data analytics closer to where data is generated.
Using distributed machine learning techniques, it is possible to drastically reduce the network overhead, while obtaining performance comparable to the cloud solution.
The analysis also shows when each distributed learning approach is preferable, based on the specific distribution of the data on the nodes.
arXiv Detail & Related papers (2021-09-27T13:44:34Z)
- Decentralized federated learning of deep neural networks on non-iid data [0.6335848702857039]
We tackle the non-IID problem of learning a personalized deep learning model in a decentralized setting.
We propose a method named Performance-Based Neighbor Selection (PENS) where clients with similar data detect each other and cooperate.
PENS achieves higher accuracies than strong baselines.
arXiv Detail & Related papers (2021-07-18T19:05:44Z)
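A toy version of the neighbor-selection step summarized in the entry above, under our assumed mechanics (not the PENS code): a client scores the model weights received from peers on its own data and keeps the best performers as cooperation partners.

```python
# Performance-based neighbor selection, sketched with linear models:
# peers whose weights fit the local data best become gossip partners.
import numpy as np

rng = np.random.default_rng(1)

def local_loss(weights, x, y):
    # Stand-in evaluation of a peer's model on local data (MSE).
    return float(np.mean((x @ weights - y) ** 2))

x_local = rng.normal(size=(64, 5))
y_local = x_local @ np.ones(5) + 0.1 * rng.normal(size=64)
peer_models = {f"peer{i}": rng.normal(size=5) for i in range(8)}

k = 3  # number of neighbors to keep
losses = {p: local_loss(w, x_local, y_local) for p, w in peer_models.items()}
neighbors = sorted(losses, key=losses.get)[:k]  # lowest local loss wins
print("cooperating with:", neighbors)
```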
- Multi-modal AsynDGAN: Learn From Distributed Medical Image Data without Sharing Private Information [55.866673486753115]
We propose an extendable and elastic learning framework to preserve privacy and security.
The proposed framework is named distributed Asynchronized Discriminator Generative Adversarial Networks (AsynDGAN).
arXiv Detail & Related papers (2020-12-15T20:41:24Z)
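Reading the summary above, the layout pairs one central generator with a discriminator per data owner. A hedged sketch of that arrangement (our reading of the architecture, not the authors' code; dimensions and the synchronous loop are simplifications):

```python
# Central generator, distributed discriminators: each discriminator trains
# on its own private data versus shared fakes; the generator aggregates
# adversarial feedback from every node, so raw data never moves.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))   # central
Ds = [nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
      for _ in range(3)]                               # one per data owner
local_data = [torch.randn(64, 8) for _ in range(3)]    # stand-in private sets

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_ds = [torch.optim.Adam(D.parameters(), lr=1e-3) for D in Ds]

for step in range(100):
    fake = G(torch.randn(64, 16))
    for D, real, opt_d in zip(Ds, local_data, opt_ds):  # local D updates
        opt_d.zero_grad()
        d_loss = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        d_loss.backward()
        opt_d.step()
    opt_g.zero_grad()                                   # central G update
    g_loss = sum(bce(D(fake), torch.ones(64, 1)) for D in Ds)
    g_loss.backward()
    opt_g.step()
```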
- Decentralised Learning from Independent Multi-Domain Labels for Person Re-Identification [69.29602103582782]
Deep learning has been successful for many computer vision tasks due to the availability of shared and centralised large-scale training data.
However, increasing awareness of privacy concerns poses new challenges to deep learning, especially for person re-identification (Re-ID).
We propose a novel paradigm called Federated Person Re-Identification (FedReID) to construct a generalisable global model (a central server) by simultaneously learning with multiple privacy-preserved local models (local clients).
This client-server collaborative learning process is iteratively performed under privacy control, enabling FedReID to realise decentralised learning without sharing distributed data or collecting any centralised data.
arXiv Detail & Related papers (2020-06-07T13:32:33Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
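The "multiple global models plus user-to-center matching" described in the entry above can be read as an EM-style alternation; the sketch below works under that assumption (our interpretation, not the paper's algorithm):

```python
# K global models ("centers"): assign each user's model to the nearest
# center, then recompute each center as the mean of its assigned models.
import numpy as np

rng = np.random.default_rng(2)
user_models = rng.normal(size=(20, 10))   # flattened local model weights
centers = user_models[rng.choice(20, size=3, replace=False)].copy()  # K = 3

for _ in range(10):
    # E-step: optimal matching of users to their closest center model.
    dists = np.linalg.norm(user_models[:, None, :] - centers[None], axis=2)
    assign = dists.argmin(axis=1)
    # M-step: aggregate each center from the user models matched to it.
    for k in range(len(centers)):
        members = user_models[assign == k]
        if len(members):
            centers[k] = members.mean(axis=0)
```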
- Evaluation Framework For Large-scale Federated Learning [10.127616622630514]
Federated learning is proposed as a machine learning setting to enable distributed edge devices, such as mobile phones, to collaboratively learn a shared prediction model.
In this paper, we introduce a framework designed for large-scale federated learning, which consists of approaches for generating datasets and a modular evaluation framework.
arXiv Detail & Related papers (2020-03-03T15:12:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.