Efficient and Secure Federated Learning for Financial Applications
- URL: http://arxiv.org/abs/2303.08355v1
- Date: Wed, 15 Mar 2023 04:15:51 GMT
- Title: Efficient and Secure Federated Learning for Financial Applications
- Authors: Tao Liu, Zhi Wang, Hui He, Wei Shi, Liangliang Lin, Wei Shi, Ran An,
Chenhao Li
- Abstract summary: This article proposes two sparsification methods to reduce communication cost in federated learning.
One is a time-varying hierarchical sparsification method for model parameter updates, which addresses the problem of maintaining model accuracy under high sparsification ratios.
The other is to apply the sparsification method to the secure aggregation framework.
- Score: 15.04345368582332
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional machine learning (ML) and deep learning approaches
require sharing customers' sensitive information with an external credit
bureau to build a prediction model, which opens the door to privacy leakage.
This leakage risk poses an enormous challenge to cooperation among financial
companies. Federated learning is a machine learning setting that can protect
data privacy, but high communication cost is often the bottleneck of federated
systems, especially for large neural networks. Limiting the number and size of
communications is necessary for the practical training of large neural
networks. Gradient sparsification has received increasing attention as a
method to reduce communication cost: only significant gradients are
transmitted, while insignificant gradients are accumulated locally. However,
the secure aggregation framework cannot directly use gradient sparsification.
This article proposes two sparsification methods to reduce communication cost
in federated learning. One is a time-varying hierarchical sparsification
method for model parameter updates, which addresses the problem of maintaining
model accuracy under high sparsification ratios and can significantly reduce
the cost of a single communication. The other applies sparsification to the
secure aggregation framework: we sparsify the encryption mask matrix to reduce
communication cost while protecting privacy. Experiments show that, under
different non-IID settings, our method reduces the upload communication cost
to about 2.9% to 18.9% of that of the conventional federated learning
algorithm at a sparsification rate of 0.01.
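To make the first mechanism concrete, here is a minimal numpy sketch of top-k sparsification with local accumulation of the withheld residual, in the spirit of the gradient sparsification described above; the paper's time-varying hierarchical schedule and its encryption details are not reproduced, and `topk_sparsify`, `residual`, and `sparse_rate` are illustrative names.

```python
import numpy as np

def topk_sparsify(update, residual, sparse_rate=0.01):
    """Keep only the largest-magnitude entries; accumulate the rest locally.

    Illustrative sketch of sparsification with error feedback, not the
    paper's exact time-varying hierarchical method.
    """
    full = update + residual                      # fold in previously withheld mass
    k = max(1, int(sparse_rate * full.size))
    idx = np.argpartition(np.abs(full), -k)[-k:]  # indices of the k largest entries
    sparse = np.zeros_like(full)
    sparse[idx] = full[idx]                       # transmit only these k values
    new_residual = full - sparse                  # keep the rest for later rounds
    return sparse, new_residual

# Usage: each client uploads `sparse` (as index/value pairs) instead of `update`.
rng = np.random.default_rng(0)
residual = np.zeros(10_000)
update = rng.normal(size=10_000)
sparse, residual = topk_sparsify(update, residual, sparse_rate=0.01)
print(np.count_nonzero(sparse))  # -> 100
```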
Related papers
- Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomalies and missing data constitute a thorny problem in industrial applications.
Deep-learning-enabled anomaly detection has emerged as a critical direction.
The data collected on edge devices contain privacy-sensitive user information.
arXiv Detail & Related papers (2024-11-06T15:38:31Z)
- Federated Hyperdimensional Computing [14.844383542052169]
Federated learning (FL) enables a loose set of participating clients to collaboratively learn a global model via coordination by a central server.
Existing FL approaches rely on complex algorithms with massive models, such as deep neural networks (DNNs).
We first propose FedHDC, a federated learning framework based on hyperdimensional computing (HDC).
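As a rough illustration of HDC in general rather than FedHDC's actual design, a client can encode samples with a fixed random projection and bundle them into class hypervectors, which are far cheaper to exchange and average than DNN weights; all names and sizes below are illustrative.

```python
import numpy as np

DIM = 10_000          # hypervector dimensionality (illustrative)
N_FEATURES = 64
rng = np.random.default_rng(0)
projection = rng.choice([-1.0, 1.0], size=(N_FEATURES, DIM))  # shared random basis

def encode(x):
    """Map a feature vector to a bipolar hypervector via random projection."""
    return np.sign(x @ projection)

def class_prototype(samples):
    """Bundle (sum, then binarize) encoded samples into one class hypervector."""
    return np.sign(sum(encode(x) for x in samples))

def predict(x, prototypes):
    """Classify by similarity to the class hypervectors."""
    h = encode(x)
    return max(prototypes, key=lambda c: prototypes[c] @ h)

# Clients would upload their class prototypes; the server bundles them again,
# which is far cheaper than exchanging DNN weights.
```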
arXiv Detail & Related papers (2023-12-26T09:24:19Z)
- Enhancing Heterogeneous Federated Learning with Knowledge Extraction and Multi-Model Fusion [9.106417025722756]
This paper presents a new federated learning (FL) method that trains machine learning models on edge devices without accessing sensitive data.
We propose a resource-aware FL method that aggregates local knowledge from edge models and distills it into robust global knowledge through knowledge distillation.
Our method reduces communication cost and improves performance on heterogeneous data and models compared to existing FL algorithms.
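A minimal PyTorch sketch of the distillation step such a method builds on, assuming a public proxy dataset on which teacher logits are available; the temperature and the server/student roles are illustrative, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=3.0):
    """KL divergence between softened teacher and student predictions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # scale by t^2 so gradients keep a comparable magnitude across temperatures
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t

# On the server, teacher_logits could be the averaged logits of edge models
# on a public proxy dataset; the global model plays the student.
```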
arXiv Detail & Related papers (2022-08-16T22:31:50Z)
- Sparsified Secure Aggregation for Privacy-Preserving Federated Learning [1.2891210250935146]
We propose a lightweight gradient sparsification framework for secure aggregation.
Our theoretical analysis demonstrates that the proposed framework can significantly reduce the communication overhead of secure aggregation.
Our experiments demonstrate that our framework reduces the communication overhead by up to 7.8x, while also speeding up the wall clock training time by 1.13x, when compared to conventional secure aggregation benchmarks.
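A toy numpy sketch of the general idea behind sparsified secure aggregation, assuming two clients have agreed on the sparse support and share a pairwise seed: the cancelling masks are generated only on that support, so mask traffic shrinks along with the update (key agreement and dropout handling are omitted).

```python
import numpy as np

def pairwise_mask(seed, support, dim):
    """Deterministic mask on the sparse support, derived from a shared seed."""
    mask = np.zeros(dim)
    mask[support] = np.random.default_rng(seed).normal(size=len(support))
    return mask

dim, support = 1000, np.arange(0, 1000, 100)   # toy sparse support (1% of entries)
u1, u2 = np.zeros(dim), np.zeros(dim)
u1[support], u2[support] = 1.0, 2.0            # sparse client updates

seed_12 = 42                                   # pairwise shared seed (e.g. agreed via key exchange)
masked1 = u1 + pairwise_mask(seed_12, support, dim)   # client 1 adds the mask
masked2 = u2 - pairwise_mask(seed_12, support, dim)   # client 2 subtracts it
agg = masked1 + masked2                        # masks cancel at the server
assert np.allclose(agg, u1 + u2)
```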
arXiv Detail & Related papers (2021-12-23T22:44:21Z)
- SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Clients in Federated Learning [3.5394650810262336]
Efficient federated learning is one of the key challenges for training and deploying AI models on edge devices.
Maintaining data privacy in federated learning raises several challenges including data heterogeneity, expensive communication cost, and limited resources.
We propose a salient parameter selection agent, based on deep reinforcement learning, that runs on local clients, and we aggregate the selected salient parameters on the central server.
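A minimal numpy sketch of the server-side step, assuming each client has already chosen its salient subset (SPATL uses a deep-RL agent for that choice, which is not reproduced here): each coordinate is averaged only over the clients that actually sent it.

```python
import numpy as np

def aggregate_salient(dim, client_payloads):
    """Average per-coordinate over only the clients that selected it.

    Each payload is (indices, values) for that client's salient parameters.
    """
    total = np.zeros(dim)
    count = np.zeros(dim)
    for idx, vals in client_payloads:
        total[idx] += vals
        count[idx] += 1
    out = np.zeros(dim)
    sent = count > 0
    out[sent] = total[sent] / count[sent]
    return out

# Server combines partial, client-specific updates into one global update.
delta = aggregate_salient(8, [(np.array([0, 3]), np.array([1.0, 2.0])),
                              (np.array([3, 5]), np.array([4.0, 6.0]))])
# delta -> [1., 0., 0., 3., 0., 6., 0., 0.]
```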
arXiv Detail & Related papers (2021-11-29T06:28:05Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training [65.68511423300812]
We propose ProgFed, a progressive training framework for efficient and effective federated learning.
ProgFed inherently reduces computation and two-way communication costs while maintaining the strong performance of the final models.
Our results show that ProgFed converges at the same rate as standard training on full models.
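A minimal PyTorch sketch of progressive training under assumed simplifications, not ProgFed's exact recipe: early phases train (and would communicate) only a shallow prefix of the network plus a small temporary head.

```python
import torch
import torch.nn as nn

# Illustrative four-stage network; sizes and schedule are assumptions.
stages = nn.ModuleList([nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(4)])
head = lambda: nn.Linear(32, 10)   # lightweight output head for partial models

def partial_model(depth):
    """Model consisting of the first `depth` stages plus a fresh head."""
    return nn.Sequential(*stages[:depth], head())

x = torch.randn(8, 32)
for depth in range(1, len(stages) + 1):       # grow the trained model over phases
    model = partial_model(depth)              # early phases have far fewer
    loss = model(x).sum()                     # parameters to train and exchange
    loss.backward()                           # (real training loop omitted)
```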
arXiv Detail & Related papers (2021-10-11T14:45:00Z)
- Efficient and Private Federated Learning with Partially Trainable Networks [8.813191488656527]
We propose to leverage partially trainable neural networks, which freeze a portion of the model parameters during the entire training process.
We empirically show that Federated learning of Partially Trainable neural networks (FedPT) can result in superior communication-accuracy trade-offs.
Our approach also enables faster training, with a smaller memory footprint, and better utility for strong differential privacy guarantees.
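A minimal PyTorch sketch of the freezing idea: a portion of the parameters is fixed for the whole run, so only the trainable tensors ever need to be exchanged. Which layer to freeze is an illustrative choice, not the paper's.

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Freeze a portion of the parameters for the entire training process; only the
# remaining trainable tensors need to be uploaded or downloaded each round.
for p in model[0].parameters():      # e.g. freeze the first layer
    p.requires_grad = False

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)                     # -> ['2.weight', '2.bias']
```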
arXiv Detail & Related papers (2021-10-06T04:28:33Z)
- FedKD: Communication Efficient Federated Learning via Knowledge Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
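Communication-efficient methods in this vein also compress the exchanged updates; as a generic sketch of low-rank compression (not necessarily FedKD's exact scheme), a weight-update matrix can be factored with a truncated SVD and shipped as two thin factors.

```python
import numpy as np

def lowrank_compress(mat, rank):
    """Factor a weight-update matrix into two thin factors via truncated SVD."""
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]      # send these instead of `mat`

W = np.random.default_rng(0).normal(size=(256, 128))
A, B = lowrank_compress(W, rank=16)
approx = A @ B                                    # receiver reconstructs the update
# Payload: 256*16 + 16*128 floats vs. 256*128 -- about an 81% reduction.
```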
arXiv Detail & Related papers (2021-08-30T15:39:54Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) has emerged as a popular distributed learning scheme that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending it to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
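A toy PyTorch sketch of sharing batch-normalization statistics between models with matching architectures; the paper's carefully designed BN treatment is more involved than this.

```python
import torch.nn as nn

def copy_bn_stats(src, dst):
    """Copy batch-norm running statistics from one model to another.

    Toy version of propagating robustness-related statistics; assumes the
    two models have identical module layouts.
    """
    for s, d in zip(src.modules(), dst.modules()):
        if isinstance(s, nn.BatchNorm2d) and isinstance(d, nn.BatchNorm2d):
            d.running_mean.copy_(s.running_mean)
            d.running_var.copy_(s.running_var)

robust_model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
plain_model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
copy_bn_stats(robust_model, plain_model)
```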
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
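As an illustration of why nonlinear level placement suits gradients concentrated near zero, here is a generic log-scale quantizer sketch in numpy; it is not CosSGD's actual scheme, and all names are illustrative.

```python
import numpy as np

def nonlinear_quantize(x, bits=4):
    """Toy nonlinear quantizer: sign + logarithmically spaced magnitude levels.

    Fine resolution is placed near zero, where most gradient mass lies.
    """
    levels = 2 ** (bits - 1)
    mag = np.abs(x)
    scale = mag.max() + 1e-12
    # integer code in [0, levels): log spacing compresses large magnitudes
    code = np.round((levels - 1) * np.log1p(mag / scale * (np.e - 1)))
    deq = (np.expm1(code / (levels - 1)) / (np.e - 1)) * scale
    return np.sign(x) * deq          # client would send sign + code + scale

g = np.random.default_rng(0).normal(scale=0.01, size=1000)
q = nonlinear_quantize(g)
print(np.abs(g - q).mean())          # small reconstruction error at 4 bits
```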
arXiv Detail & Related papers (2020-12-15T12:20:28Z)