A Blockchain-based Model for Securing Data Pipeline in a Heterogeneous
Information System
- URL: http://arxiv.org/abs/2401.09240v1
- Date: Wed, 17 Jan 2024 14:40:09 GMT
- Title: A Blockchain-based Model for Securing Data Pipeline in a Heterogeneous
Information System
- Authors: MN Ramahlosi, Y Madani, A Akanbi
- Abstract summary: This article presents a blockchain-based model for securing data pipelines in a heterogeneous information system.
The model is designed to ensure data integrity, confidentiality, and authenticity in a decentralized manner.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In our digital world, access to personal and public data raises
challenging security and privacy concerns. Modern information systems are
heterogeneous in nature and carry an inherent security vulnerability:
unsecured communication pipelines between connected endpoints leave data
susceptible to interception and modification. This research article presents
a blockchain-based model for securing data pipelines in a heterogeneous
information system, using an integrated multi-hazard early warning system
(MHEWS) as a case study. The proposed model leverages the inherent security
features of blockchain technology to address the security and privacy
concerns that arise in data pipelines, and is designed to ensure data
integrity, confidentiality, and authenticity in a decentralized manner. The
model is evaluated in a hybrid environment through a prototype implementation
and simulation experiments; the outcomes demonstrate advantages over
traditional approaches, yielding a tamper-proof, immutable data pipeline that
preserves data authenticity and integrity through a confidential ledger.
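To make the confidential-ledger idea concrete, below is a minimal sketch of a hash-chained, append-only ledger that records only payload hashes (preserving confidentiality) while letting any party verify pipeline integrity. This is an illustration under hypothetical names (LedgerEntry, DataPipelineLedger), not the authors' implementation.

```python
import hashlib
import json
import time
from dataclasses import dataclass

@dataclass
class LedgerEntry:
    """One immutable record of a message passing through the pipeline."""
    index: int
    timestamp: float
    payload_hash: str  # hash of the message, not the message itself
    prev_hash: str
    entry_hash: str = ""

    def compute_hash(self) -> str:
        body = json.dumps([self.index, self.timestamp,
                           self.payload_hash, self.prev_hash])
        return hashlib.sha256(body.encode()).hexdigest()

class DataPipelineLedger:
    """Append-only ledger: modifying any entry breaks the hash chain."""
    def __init__(self) -> None:
        self.entries: list[LedgerEntry] = []

    def append(self, payload: bytes) -> LedgerEntry:
        prev = self.entries[-1].entry_hash if self.entries else "0" * 64
        entry = LedgerEntry(
            index=len(self.entries),
            timestamp=time.time(),
            payload_hash=hashlib.sha256(payload).hexdigest(),
            prev_hash=prev,
        )
        entry.entry_hash = entry.compute_hash()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Re-derive every hash; False means the record was tampered with."""
        for i, e in enumerate(self.entries):
            prev = self.entries[i - 1].entry_hash if i else "0" * 64
            if e.prev_hash != prev or e.entry_hash != e.compute_hash():
                return False
        return True

ledger = DataPipelineLedger()
ledger.append(b"sensor reading: water level 4.2m")  # e.g. an MHEWS message
assert ledger.verify()
```

Because each entry commits to its predecessor's hash, altering any recorded message changes the chain downstream, so verify() returning False flags tampering anywhere in the pipeline history.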
Related papers
- Balancing Security and Accuracy: A Novel Federated Learning Approach for Cyberattack Detection in Blockchain Networks [10.25938198121523]
This paper presents a novel Collaborative Cyberattack Detection (CCD) system aimed at enhancing the security of blockchain-based data-sharing networks.
We explore the effects of various noise types on key performance metrics, including attack detection accuracy, deep learning model convergence time, and the overall runtime of global model generation.
Our findings reveal the intricate trade-offs between ensuring data privacy and maintaining system performance, offering valuable insights into optimizing these parameters for diverse CCD environments.
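As a generic illustration of that trade-off (not the paper's CCD system), client model updates can be perturbed with Gaussian noise before aggregation; larger sigma strengthens privacy but degrades detection accuracy and slows convergence:

```python
import numpy as np

def aggregate_with_noise(updates: list[np.ndarray], sigma: float) -> np.ndarray:
    """Average client updates after adding Gaussian noise to each one."""
    rng = np.random.default_rng(0)
    noisy = [u + rng.normal(0.0, sigma, size=u.shape) for u in updates]
    return np.mean(noisy, axis=0)

# Example: three clients, one weight vector each.
clients = [np.array([0.2, -0.1, 0.4]) for _ in range(3)]
print(aggregate_with_noise(clients, sigma=0.05))
```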
arXiv Detail & Related papers (2024-09-08T04:38:07Z) - Complete Security and Privacy for AI Inference in Decentralized Systems [14.526663289437584]
Large models are crucial for tasks like disease diagnosis, but tend to be fragile and difficult to scale.
Nesa solves these challenges with a comprehensive framework using multiple techniques to protect data and model outputs.
Nesa's state-of-the-art proofs and principles demonstrate the framework's effectiveness.
arXiv Detail & Related papers (2024-07-28T05:09:17Z)
- Security Approaches for Data Provenance in the Internet of Things: A Systematic Literature Review [0.0]
Internet of Things systems are vulnerable to security attacks.
Data provenance offers a way to record the origin, history, and handling of data to address these vulnerabilities.
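As a toy sketch of that idea (our own construction, not taken from the review), a provenance record pairs a data item's origin with an append-only handling history:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """Minimal provenance: where data came from and what touched it."""
    data_id: str
    origin: str  # e.g. a device or sensor identifier
    history: list[str] = field(default_factory=list)

    def record(self, actor: str, action: str) -> None:
        self.history.append(f"{actor}:{action}")

rec = ProvenanceRecord(data_id="pkt-001", origin="sensor-42")
rec.record("gateway-1", "forwarded")
rec.record("cloud-ingest", "stored")
print(rec.history)  # ['gateway-1:forwarded', 'cloud-ingest:stored']
```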
arXiv Detail & Related papers (2024-07-03T19:25:36Z)
- Marking the Pace: A Blockchain-Enhanced Privacy-Traceable Strategy for Federated Recommender Systems [11.544642210389894]
Federated recommender systems have been enhanced through data sharing and continuous model updates.
Given the sensitivity of IoT data, transparent data processing in data sharing and model updates is paramount.
Existing methods fall short in tracing the flow of shared data and the evolution of model updates.
We present LIBERATE, a privacy-traceable federated recommender system.
arXiv Detail & Related papers (2024-06-07T07:21:21Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Decentralised, Scalable and Privacy-Preserving Synthetic Data Generation [8.982917734231165]
We build a novel system that allows the contributors of real data to autonomously participate in differentially private synthetic data generation.
Our solution is based on three building blocks: Solid (Social Linked Data), MPC (Secure Multi-Party Computation), and Trusted Execution Environments (TEEs).
We show how these three technologies can be effectively used to address various challenges in responsible and trustworthy synthetic data generation.
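To give a flavor of the MPC building block, here is textbook additive secret sharing (illustrative only, not the paper's protocol): a secret is split so that no single share reveals it, yet the shares sum back to the value:

```python
import secrets

P = 2**61 - 1  # a public prime modulus

def share(value: int, n_parties: int) -> list[int]:
    """Split value into n additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

s = share(123456, n_parties=3)
assert reconstruct(s) == 123456  # any single share is uniformly random
```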
arXiv Detail & Related papers (2023-10-30T22:27:32Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Privacy-sensitive data is subject to stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
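Such a sanitized form is commonly produced by a mechanism like the classic Laplace mechanism; a minimal sketch for an epsilon-DP count query (generic, not specific to this paper):

```python
import numpy as np

def laplace_count(data: np.ndarray, epsilon: float) -> float:
    """epsilon-DP release of a count; the sensitivity of a count is 1."""
    true_count = float(np.sum(data))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages_over_40 = np.array([1, 0, 1, 1, 0, 1])  # one indicator bit per person
print(laplace_count(ages_over_40, epsilon=0.5))
```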
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
- Private Set Generation with Discriminative Information [63.851085173614]
Differentially private data generation is a promising solution to the data privacy challenge.
Existing private generative models struggle with the utility of their synthetic samples.
We introduce a simple yet effective method that greatly improves the sample utility of state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-07T10:02:55Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
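Those two ingredients combine as in the standard DP-SGD step; a minimal NumPy sketch (our simplification, not the paper's experimental setup):

```python
import numpy as np

def dp_sgd_step(per_example_grads: np.ndarray,
                clip_norm: float, sigma: float) -> np.ndarray:
    """Clip each example's gradient to clip_norm, average,
    then add calibrated Gaussian noise."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    noise = np.random.normal(
        0.0, sigma * clip_norm / len(per_example_grads),
        size=per_example_grads.shape[1],
    )
    return clipped.mean(axis=0) + noise

grads = np.array([[0.5, 2.0], [3.0, -1.0], [0.1, 0.1]])  # 3 examples, 2 params
print(dp_sgd_step(grads, clip_norm=1.0, sigma=1.0))
```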
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU achieves a prediction accuracy of 90.96%, competitive with advanced deep learning models.
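The secure parameter aggregation is in the spirit of federated averaging, where only model parameters, never raw traffic data, leave a client; a schematic FedAvg-style sketch (not FedGRU itself):

```python
import numpy as np

def federated_average(client_weights: list[np.ndarray],
                      client_sizes: list[int]) -> np.ndarray:
    """Weighted average of client model parameters (FedAvg-style).

    Only parameters are exchanged; raw data stays local.
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two organizations with 300 and 700 local samples.
w_global = federated_average(
    [np.array([0.1, 0.2]), np.array([0.3, 0.4])],
    [300, 700],
)
print(w_global)  # weighted toward the larger client
```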
arXiv Detail & Related papers (2020-03-19T13:07:49Z)