A Distributed Trust Framework for Privacy-Preserving Machine Learning
- URL: http://arxiv.org/abs/2006.02456v1
- Date: Wed, 3 Jun 2020 18:06:13 GMT
- Title: A Distributed Trust Framework for Privacy-Preserving Machine Learning
- Authors: Will Abramson, Adam James Hall, Pavlos Papadopoulos, Nikolaos
Pitropakis, William J Buchanan
- Abstract summary: This paper outlines a distributed infrastructure which is used to facilitate peer-to-peer trust between distributed agents.
We detail a proof of concept using Hyperledger Aries, Decentralised Identifiers (DIDs) and Verifiable Credentials (VCs).
- Score: 4.282091426377838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When training a machine learning model, it is standard procedure for the
researcher to have full knowledge of both the data and model. However, this
engenders a lack of trust between data owners and data scientists. Data owners
are justifiably reluctant to relinquish control of private information to third
parties. Privacy-preserving techniques distribute computation in order to
ensure that data remains in the control of the owner while learning takes
place. However, architectures distributed amongst multiple agents introduce an
entirely new set of security and trust complications. These include data
poisoning and model theft. This paper outlines a distributed infrastructure
that facilitates peer-to-peer trust between distributed agents collaboratively
performing a privacy-preserving workflow. Our outlined prototype sets industry
gatekeepers and governance bodies as credential issuers. Before participating
in the distributed learning workflow, actors, including potentially malicious
ones, must first negotiate valid credentials. We detail a proof of concept
using Hyperledger Aries, Decentralised Identifiers (DIDs) and Verifiable
Credentials (VCs) to establish a distributed trust architecture during a
privacy-preserving machine learning experiment. Specifically, we utilise secure
and authenticated DID communication channels in order to facilitate a federated
learning workflow related to mental health care data.
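The credential-gated workflow described in the abstract can be illustrated with a small, self-contained sketch. This is not the paper's implementation: in the actual proof of concept the credential exchange happens between Hyperledger Aries agents over DIDComm, whereas here the proof presentation is reduced to a hypothetical `Presentation` record and a hard-coded `TRUSTED_ISSUERS` set. The sketch only shows the gating logic: a coordinator averages model updates exclusively from participants whose credentials verify against a trusted issuer.

```python
"""Sketch of credential-gated federated averaging, assuming a hypothetical
`Presentation` record in place of a real Aries proof-presentation exchange."""

from dataclasses import dataclass
from typing import Dict, List, Optional

import numpy as np

# DIDs of credential issuers (industry gatekeepers / governance bodies) that
# the coordinator trusts. The values are hypothetical placeholders.
TRUSTED_ISSUERS = {"did:example:governance-body"}


@dataclass
class Presentation:
    """A verifiable-credential presentation, reduced to the fields needed here."""
    holder_did: str
    issuer_did: str
    credential_type: str
    signature_valid: bool  # outcome of cryptographic verification, assumed done upstream


@dataclass
class ModelUpdate:
    holder_did: str
    weights: np.ndarray


def presentation_is_acceptable(p: Presentation) -> bool:
    """Admit only holders of a participant credential from a trusted issuer."""
    return (
        p.signature_valid
        and p.issuer_did in TRUSTED_ISSUERS
        and p.credential_type == "FederatedLearningParticipant"
    )


def aggregate_round(presentations: Dict[str, Presentation],
                    updates: List[ModelUpdate]) -> Optional[np.ndarray]:
    """Federated averaging restricted to credentialed participants;
    updates from unverified DIDs are silently dropped."""
    admitted = [u.weights for u in updates
                if u.holder_did in presentations
                and presentation_is_acceptable(presentations[u.holder_did])]
    return np.mean(admitted, axis=0) if admitted else None


if __name__ == "__main__":
    presentations = {
        "did:example:hospital-a": Presentation(
            "did:example:hospital-a", "did:example:governance-body",
            "FederatedLearningParticipant", True),
        "did:example:mallory": Presentation(
            "did:example:mallory", "did:example:self-issued",
            "FederatedLearningParticipant", True),
    }
    updates = [ModelUpdate("did:example:hospital-a", np.array([0.1, 0.2])),
               ModelUpdate("did:example:mallory", np.array([9.9, 9.9]))]
    print(aggregate_round(presentations, updates))  # only hospital-a is averaged
```

In the paper's setting the same check would be driven by an Aries present-proof exchange over a secure, authenticated DID communication channel, with the trusted issuer DIDs taking the place of the placeholder set above.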
Related papers
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z) - Blockchain-Based Federated Learning: Incentivizing Data Sharing and
Penalizing Dishonest Behavior [0.0]
This paper proposes a comprehensive framework that integrates data trust in federated learning with InterPlanetary File System, blockchain, and smart contracts.
The proposed model is effective in improving the accuracy of federated learning models while ensuring the security and fairness of the data-sharing process.
The research paper also presents a decentralized federated learning platform that successfully trained a CNN model on the MNIST dataset.
arXiv Detail & Related papers (2023-07-19T23:05:49Z) - Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Privacy-Preserving Machine Learning for Collaborative Data Sharing via
Auto-encoder Latent Space Embeddings [57.45332961252628]
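To make the FL/SL contrast in the entry above concrete, here is a minimal NumPy sketch (my own illustration, not code from the cited paper): the federated-learning half averages locally trained weights, while the split-learning half sends only a cut-layer activation to the server.

```python
"""Minimal NumPy sketch contrasting the two data flows described above:
federated averaging of locally trained weights versus split learning, where
only cut-layer activations ("smashed data") leave the client."""

import numpy as np

rng = np.random.default_rng(0)

# --- Federated learning: clients share weights, the server averages them ---
client_weights = [rng.normal(size=(4, 2)) for _ in range(3)]  # locally trained
global_weights = np.mean(client_weights, axis=0)              # FedAvg step

# --- Split learning: clients share activations, not weights ---
W_client = rng.normal(size=(4, 3))   # layers held by the client
W_server = rng.normal(size=(3, 2))   # layers held by the server

x = rng.normal(size=(1, 4))          # private client example (never sent)
smashed = np.tanh(x @ W_client)      # cut-layer activation sent to the server
logits = smashed @ W_server          # server completes the forward pass

print(global_weights.shape, logits.shape)
```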
- Privacy-Preserving Machine Learning for Collaborative Data Sharing via Auto-encoder Latent Space Embeddings [57.45332961252628]
Privacy-preserving machine learning in data-sharing processes is an ever-critical task.
This paper presents an innovative framework that uses Representation Learning via autoencoders to generate privacy-preserving embedded data.
arXiv Detail & Related papers (2022-11-10T17:36:58Z) - Collusion Resistant Federated Learning with Oblivious Distributed
Differential Privacy [4.951247283741297]
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model.
We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion.
We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets.
arXiv Detail & Related papers (2022-02-20T19:52:53Z) - A Privacy-Preserving and Trustable Multi-agent Learning Framework [34.28936739262812]
This paper presents Privacy-preserving and Trustable Distributed Learning (PT-DL).
PT-DL is a fully decentralized framework that relies on Differential Privacy to guarantee strong privacy protections of the agents' data.
The paper shows that PT-DL is resilient up to a 50% collusion attack, with high probability, in a malicious trust model.
arXiv Detail & Related papers (2021-06-02T15:46:27Z) - Privacy and Trust Redefined in Federated Machine Learning [5.4475482673944455]
We present a privacy-preserving decentralised workflow that facilitates trusted federated learning among participants.
Only entities in possession of Verifiable Credentials issued from the appropriate authorities are able to establish secure, authenticated communication channels.
arXiv Detail & Related papers (2021-03-29T16:47:01Z) - Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach, which allows knowledge to be shared between trained models.
Students are trained on the output of their teachers via synthetically generated input data.
The results show that an initially untrained student model, trained on the teacher's output, reaches F1-scores comparable to the teacher's.
arXiv Detail & Related papers (2021-02-01T14:38:54Z) - Differentially Private Secure Multi-Party Computation for Federated
Learning in Financial Applications [5.50791468454604]
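As a rough illustration of the teacher-student idea summarised in the entry above (an assumption about the general technique, not the cited method), the following NumPy sketch fits a student model to a teacher's outputs on synthetically generated inputs, so the teacher's training data never leaves its owner.

```python
"""Toy NumPy sketch of knowledge distillation over synthetic inputs: the
student only ever sees the teacher's outputs, never the private training data."""

import numpy as np

rng = np.random.default_rng(1)

# Teacher: a model already trained on private data (its weights stay local).
W_teacher = rng.normal(size=(8, 3))

def teacher(x: np.ndarray) -> np.ndarray:
    return x @ W_teacher

# Synthetic inputs generated by the student; no real records are exchanged.
X_syn = rng.normal(size=(512, 8))
Y_teacher = teacher(X_syn)           # the only information the teacher reveals

# Student: distilled by regressing onto the teacher's outputs (least squares).
W_student, *_ = np.linalg.lstsq(X_syn, Y_teacher, rcond=None)

print(np.max(np.abs(W_student - W_teacher)))  # student closely matches teacher
```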
- Differentially Private Secure Multi-Party Computation for Federated Learning in Financial Applications [5.50791468454604]
Federated learning enables a population of clients, working with a trusted server, to collaboratively learn a shared machine learning model.
This reduces the risk of exposing sensitive data, but it is still possible to reverse engineer information about a client's private data set from communicated model parameters.
We present a privacy-preserving federated learning protocol to a non-specialist audience, demonstrate it using logistic regression on a real-world credit card fraud data set, and evaluate it using an open-source simulation platform.
arXiv Detail & Related papers (2020-10-12T17:16:27Z) - Decentralised Learning from Independent Multi-Domain Labels for Person
Re-Identification [69.29602103582782]
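The entry above combines differential privacy with secure multi-party computation; the sketch below illustrates only the differential-privacy side in isolation, clipping each client update and adding Gaussian noise before averaging. The clipping bound and noise scale are arbitrary placeholders, not values from the paper.

```python
"""Illustrative NumPy sketch of differentially private federated averaging:
client updates are clipped and perturbed with Gaussian noise so the released
average reveals less about any single client's data."""

import numpy as np

rng = np.random.default_rng(2)

CLIP_NORM = 1.0       # per-client L2 clipping bound (placeholder)
NOISE_STD = 0.5       # Gaussian noise standard deviation (placeholder)

def privatise(update: np.ndarray) -> np.ndarray:
    """Clip the update to CLIP_NORM, then add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, CLIP_NORM / max(norm, 1e-12))
    return clipped + rng.normal(scale=NOISE_STD, size=update.shape)

client_updates = [rng.normal(size=5) for _ in range(10)]
noisy_average = np.mean([privatise(u) for u in client_updates], axis=0)
print(noisy_average)
```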
- Decentralised Learning from Independent Multi-Domain Labels for Person Re-Identification [69.29602103582782]
Deep learning has been successful for many computer vision tasks due to the availability of shared and centralised large-scale training data.
However, increasing awareness of privacy concerns poses new challenges to deep learning, especially for person re-identification (Re-ID).
We propose a novel paradigm called Federated Person Re-Identification (FedReID) to construct a generalisable global model (a central server) by simultaneously learning with multiple privacy-preserved local models (local clients).
This client-server collaborative learning process is iteratively performed under privacy control, enabling FedReID to realise decentralised learning without sharing distributed data nor collecting any data centrally.
arXiv Detail & Related papers (2020-06-07T13:32:33Z)