Privacy and Trust Redefined in Federated Machine Learning
- URL: http://arxiv.org/abs/2103.15753v2
- Date: Tue, 30 Mar 2021 15:07:01 GMT
- Title: Privacy and Trust Redefined in Federated Machine Learning
- Authors: Pavlos Papadopoulos, Will Abramson, Adam J. Hall, Nikolaos Pitropakis
and William J. Buchanan
- Abstract summary: We present a privacy-preserving decentralised workflow that facilitates trusted federated learning among participants.
Only entities in possession of Verifiable Credentials issued from the appropriate authorities are able to establish secure, authenticated communication channels.
- Score: 5.4475482673944455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A common privacy issue in traditional machine learning is that data needs to
be disclosed for the training procedures. In situations with highly sensitive
data such as healthcare records, accessing this information is challenging and
often prohibited. Fortunately, privacy-preserving technologies have been
developed to overcome this hurdle by distributing the training computation
and keeping the data private to its owners. The distribution of the computation
to multiple participating entities introduces new privacy complications and
risks. In this paper, we present a privacy-preserving decentralised workflow
that facilitates trusted federated learning among participants. Our
proof-of-concept defines a trust framework instantiated using decentralised
identity technologies being developed under Hyperledger projects
Aries/Indy/Ursa. Only entities in possession of Verifiable Credentials issued
from the appropriate authorities are able to establish secure, authenticated
communication channels authorised to participate in a federated learning
workflow related to mental health data.
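The core admission rule in the abstract, that only holders of Verifiable Credentials from trusted issuers may join the federated learning workflow, can be illustrated with a minimal sketch. This is not the paper's Hyperledger Aries/Indy/Ursa implementation; the issuer registry, credential layout, and the HMAC standing in for a cryptographic proof are all simplifying assumptions.

```python
import hashlib
import hmac

# Hypothetical registry of trusted issuers and their signing keys. In the
# paper's setup this role is played by decentralised identity infrastructure
# (DIDs and issuer keys anchored via Hyperledger Indy), not a shared dict.
TRUSTED_ISSUERS = {"health-authority": b"issuer-secret-key"}

def sign_credential(issuer: str, subject: str) -> dict:
    """Issuer signs a credential naming a would-be FL participant."""
    key = TRUSTED_ISSUERS[issuer]
    proof = hmac.new(key, subject.encode(), hashlib.sha256).hexdigest()
    return {"issuer": issuer, "subject": subject, "proof": proof}

def admit_participant(credential: dict, participants: list) -> bool:
    """Admit a client to the FL round only if its credential verifies."""
    key = TRUSTED_ISSUERS.get(credential["issuer"])
    if key is None:
        return False  # unknown issuer: reject outright
    expected = hmac.new(key, credential["subject"].encode(),
                        hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, credential["proof"]):
        participants.append(credential["subject"])
        return True
    return False  # forged or tampered proof

participants = []
good = sign_credential("health-authority", "clinic-a")
admit_participant(good, participants)
forged = {"issuer": "health-authority", "subject": "mallory", "proof": "00"}
admit_participant(forged, participants)
print(participants)  # ['clinic-a']
```

In the real workflow this check happens while establishing the secure, authenticated communication channel, so unauthorised parties never see model updates at all.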
Related papers
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- IPFed: Identity protected federated learning for user authentication [0.9176056742068812]
We show that it is difficult to achieve both privacy preservation and high accuracy with existing methods.
We propose IPFed which is privacy-preserving federated learning using random projection for class embedding.
Experiments on face image datasets show that IPFed can protect the privacy of personal data while maintaining the accuracy of the state-of-the-art method.
arXiv Detail & Related papers (2024-05-07T02:29:41Z)
- A Survey on Blockchain-Based Federated Learning and Data Privacy [1.0499611180329802]
Federated learning is a decentralized machine learning paradigm that allows multiple clients to collaborate by leveraging local computational power and transmitting models rather than data.
On the other hand, federated learning has the drawback of data leakage due to the lack of privacy-preserving mechanisms employed during storage, transfer, and sharing.
This survey aims to compare the performance and security of various data privacy mechanisms adopted in blockchain-based federated learning architectures.
arXiv Detail & Related papers (2023-06-29T23:43:25Z)
- Privacy-Preserving Joint Edge Association and Power Optimization for the Internet of Vehicles via Federated Multi-Agent Reinforcement Learning [74.53077322713548]
We investigate the privacy-preserving joint edge association and power allocation problem.
The proposed solution strikes a compelling trade-off, while preserving a higher privacy level than the state-of-the-art solutions.
arXiv Detail & Related papers (2023-01-26T10:09:23Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z)
- Reliability Check via Weight Similarity in Privacy-Preserving Multi-Party Machine Learning [7.552100672006174]
We focus on addressing the concerns of data privacy, model privacy, and data quality associated with multi-party machine learning.
We present a scheme for privacy-preserving collaborative learning that checks the participants' data quality while guaranteeing data and model privacy.
arXiv Detail & Related papers (2021-01-14T08:55:42Z)
- FedOCR: Communication-Efficient Federated Learning for Scene Text Recognition [76.26472513160425]
We study how to make use of decentralized datasets for training a robust scene text recognizer.
To make FedOCR fairly suitable to be deployed on end devices, we make two improvements including using lightweight models and hashing techniques.
arXiv Detail & Related papers (2020-07-22T14:30:50Z)
- Secure Byzantine-Robust Machine Learning [61.03711813598128]
We propose a secure two-server protocol that offers both input privacy and Byzantine-robustness.
In addition, this protocol is communication-efficient, fault-tolerant and enjoys local differential privacy.
arXiv Detail & Related papers (2020-06-08T16:55:15Z)
- A Distributed Trust Framework for Privacy-Preserving Machine Learning [4.282091426377838]
This paper outlines a distributed infrastructure which is used to facilitate peer-to-peer trust between distributed agents.
We detail a proof of concept using Hyperledger Aries, Decentralised Identifiers (DIDs) and Verifiable Credentials (VCs).
arXiv Detail & Related papers (2020-06-03T18:06:13Z)
- PrivacyFL: A simulator for privacy-preserving and secure federated learning [2.578242050187029]
Federated learning is a technique that enables distributed clients to collaboratively learn a shared machine learning model.
PrivacyFL is a privacy-preserving and secure federated learning simulator.
arXiv Detail & Related papers (2020-02-19T20:16:13Z)
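The IPFed entry above protects class embeddings with random projection. As a rough illustration of the general idea, not IPFed's actual construction, the sketch below projects a vector through a seeded Gaussian random matrix: parties sharing the seed obtain comparable projections, while recovering the original vector without the matrix is hard.

```python
import math
import random

def random_projection(vec, out_dim, seed=0):
    """Project `vec` to `out_dim` dimensions with a Gaussian random matrix.

    The seed acts as a shared secret: the same seed yields the same
    projection matrix, so projected embeddings remain comparable across
    parties, while the raw embedding stays hidden from anyone without it.
    """
    rng = random.Random(seed)
    in_dim = len(vec)
    projected = []
    for _ in range(out_dim):
        # One Gaussian row of the projection matrix per output dimension.
        row = [rng.gauss(0.0, 1.0) for _ in range(in_dim)]
        # Scale by 1/sqrt(out_dim) so distances are roughly preserved
        # (Johnson-Lindenstrauss style).
        projected.append(sum(r * v for r, v in zip(row, vec))
                         / math.sqrt(out_dim))
    return projected

embedding = [0.5, -1.2, 0.3, 2.0, -0.7, 1.1, 0.0, 0.8]
print(random_projection(embedding, 4, seed=42))
```

Because distances between projected vectors are approximately preserved, similarity-based tasks such as authentication can still run on the protected representations.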
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.