Privacy and Trust Redefined in Federated Machine Learning
- URL: http://arxiv.org/abs/2103.15753v2
- Date: Tue, 30 Mar 2021 15:07:01 GMT
- Title: Privacy and Trust Redefined in Federated Machine Learning
- Authors: Pavlos Papadopoulos, Will Abramson, Adam J. Hall, Nikolaos Pitropakis and William J. Buchanan
- Abstract summary: We present a privacy-preserving decentralised workflow that facilitates trusted federated learning among participants.
Only entities in possession of Verifiable Credentials issued from the appropriate authorities are able to establish secure, authenticated communication channels.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A common privacy issue in traditional machine learning is that data needs to
be disclosed for the training procedures. In situations with highly sensitive
data such as healthcare records, accessing this information is challenging and
often prohibited. Fortunately, privacy-preserving technologies have been developed to overcome this hurdle by distributing the training computation while keeping the data private to its owners. However, distributing the computation across multiple participating entities introduces new privacy complications and
risks. In this paper, we present a privacy-preserving decentralised workflow
that facilitates trusted federated learning among participants. Our
proof-of-concept defines a trust framework instantiated using decentralised
identity technologies being developed under Hyperledger projects
Aries/Indy/Ursa. Only entities in possession of Verifiable Credentials issued
from the appropriate authorities are able to establish secure, authenticated
communication channels authorised to participate in a federated learning
workflow related to mental health data.
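As a rough illustration of the credential gate described in the abstract, the sketch below admits a participant to the training workflow only if it holds a valid credential from a trusted issuer. This is a hypothetical simplification: `Credential`, `TRUSTED_ISSUERS`, and `may_join_training` are invented names, and the boolean `signature_valid` field stands in for real cryptographic verification performed by the Hyperledger Aries/Indy/Ursa stack.

```python
# Hypothetical sketch of credential-gated admission to a federated
# learning workflow; not the Hyperledger Aries API.
from dataclasses import dataclass

# issuers whose credentials we accept, identified by DID (illustrative value)
TRUSTED_ISSUERS = {"did:example:health-authority"}

@dataclass
class Credential:
    issuer_did: str
    subject_did: str
    role: str               # e.g. "hospital", "researcher"
    signature_valid: bool   # placeholder for real cryptographic verification

def may_join_training(cred: Credential) -> bool:
    """Admit a participant only if its credential is valid, comes from a
    trusted issuer, and grants the expected role."""
    return (
        cred.signature_valid
        and cred.issuer_did in TRUSTED_ISSUERS
        and cred.role == "hospital"
    )

hospital = Credential("did:example:health-authority",
                      "did:example:hospital-1", "hospital", True)
imposter = Credential("did:example:unknown",
                      "did:example:mallory", "hospital", True)
print(may_join_training(hospital))  # True
print(may_join_training(imposter))  # False
```

Only after this check succeeds would a secure, authenticated communication channel be opened for the federated learning workflow.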
Related papers
- NeurIPS 2023 Competition: Privacy Preserving Federated Learning Document VQA [49.74911193222192]
The competition introduced a dataset of real invoice documents, along with associated questions and answers.
The base model is a multi-modal generative language model, and sensitive information could be exposed through either the visual or textual input modality.
Participants proposed elegant solutions to reduce communication costs while maintaining a minimum utility threshold.
arXiv Detail & Related papers (2024-11-06T07:51:19Z)
- Towards Split Learning-based Privacy-Preserving Record Linkage [49.1574468325115]
Split Learning has been introduced to facilitate applications where user data privacy is a requirement.
In this paper, we investigate the potential of Split Learning for privacy-preserving record matching.
arXiv Detail & Related papers (2024-09-02T09:17:05Z)
- Privacy in Federated Learning [0.0]
Federated Learning (FL) represents a significant advancement in distributed machine learning.
This chapter delves into the core privacy concerns within FL, including the risks of data reconstruction, model inversion attacks, and membership inference.
It examines the trade-offs between model accuracy and privacy, emphasizing the importance of balancing these factors in practical implementations.
arXiv Detail & Related papers (2024-08-12T18:41:58Z)
- Lancelot: Towards Efficient and Privacy-Preserving Byzantine-Robust Federated Learning within Fully Homomorphic Encryption [10.685816010576918]
We propose Lancelot, an innovative and computationally efficient BRFL framework that employs fully homomorphic encryption (FHE) to safeguard against malicious client activities while preserving data privacy.
Our extensive testing, which includes medical imaging diagnostics and widely-used public image datasets, demonstrates that Lancelot significantly outperforms existing methods, offering more than a twenty-fold increase in processing speed, all while maintaining data privacy.
arXiv Detail & Related papers (2024-08-12T14:48:25Z)
- A Survey on Blockchain-Based Federated Learning and Data Privacy [1.0499611180329802]
Federated learning is a decentralized machine learning paradigm that allows multiple clients to collaborate by leveraging local computational power and transmitting only model updates.
However, federated learning risks data leakage when no privacy-preserving mechanisms are employed during storage, transfer, and sharing.
This survey aims to compare the performance and security of various data privacy mechanisms adopted in blockchain-based federated learning architectures.
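The federated learning paradigm summarized above can be sketched with federated averaging (FedAvg): clients train locally and only model parameters, never raw data, travel to the aggregator. The toy one-parameter least-squares model below is illustrative, not taken from the survey; a real deployment would add the secure aggregation and privacy mechanisms the survey compares.

```python
# Minimal FedAvg sketch: each client fits w on its local data; the server
# averages the returned weights. Toy model: y = w * x, squared-error loss.
import random

def local_update(weights, data, lr=0.1):
    """One pass of gradient descent on a client's local data."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def fed_avg(global_w, client_datasets, rounds=20):
    for _ in range(rounds):
        # each client starts from the current global model
        updates = [local_update(global_w, d) for d in client_datasets]
        # server averages the returned weights; raw data never leaves clients
        global_w = sum(updates) / len(updates)
    return global_w

random.seed(0)
true_w = 3.0
clients = [[(x, true_w * x) for x in (random.random() for _ in range(20))]
           for _ in range(4)]
print(fed_avg(0.0, clients))  # converges toward the true weight 3.0
```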
arXiv Detail & Related papers (2023-06-29T23:43:25Z)
- Privacy-Preserving Joint Edge Association and Power Optimization for the Internet of Vehicles via Federated Multi-Agent Reinforcement Learning [74.53077322713548]
We investigate the privacy-preserving joint edge association and power allocation problem.
The proposed solution strikes a compelling trade-off, while preserving a higher privacy level than the state-of-the-art solutions.
arXiv Detail & Related papers (2023-01-26T10:09:23Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- FedOCR: Communication-Efficient Federated Learning for Scene Text Recognition [76.26472513160425]
We study how to make use of decentralized datasets for training a robust scene text recognizer.
To make FedOCR fairly suitable to be deployed on end devices, we make two improvements including using lightweight models and hashing techniques.
arXiv Detail & Related papers (2020-07-22T14:30:50Z)
- Secure Byzantine-Robust Machine Learning [61.03711813598128]
We propose a secure two-server protocol that offers both input privacy and Byzantine-robustness.
In addition, this protocol is communication-efficient, fault-tolerant and enjoys local differential privacy.
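A minimal sketch of the input-privacy ingredient such a two-server design can rest on is additive secret sharing: each server sees only a uniformly random share of a client's input, and only the combination of both servers' aggregates reveals the sum. This is an assumption for illustration; the paper's actual protocol, its Byzantine-robustness checks, and its differential-privacy mechanism are not reproduced here.

```python
# Illustrative two-server additive secret sharing over a prime field.
# Neither server alone learns an individual client input.
import secrets

P = 2**61 - 1  # prime modulus for the field

def share(x: int) -> tuple[int, int]:
    """Split x into two additive shares mod P, one per server."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s1: int, s2: int) -> int:
    return (s1 + s2) % P

# each server sums the shares it holds; reconstructing the two aggregates
# yields the total without exposing any single client's update
updates = [5, 11, 2]
shares = [share(u) for u in updates]
agg1 = sum(s[0] for s in shares) % P
agg2 = sum(s[1] for s in shares) % P
print(reconstruct(agg1, agg2))  # 18
```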
arXiv Detail & Related papers (2020-06-08T16:55:15Z)
- A Distributed Trust Framework for Privacy-Preserving Machine Learning [4.282091426377838]
This paper outlines a distributed infrastructure which is used to facilitate peer-to-peer trust between distributed agents.
We detail a proof of concept using Hyperledger Aries, Decentralised Identifiers (DIDs), and Verifiable Credentials (VCs).
arXiv Detail & Related papers (2020-06-03T18:06:13Z)
- PrivacyFL: A simulator for privacy-preserving and secure federated learning [2.578242050187029]
Federated learning is a technique that enables distributed clients to collaboratively learn a shared machine learning model.
PrivacyFL is a privacy-preserving and secure federated learning simulator.
arXiv Detail & Related papers (2020-02-19T20:16:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.