A Data Quarantine Model to Secure Data in Edge Computing
- URL: http://arxiv.org/abs/2111.07672v1
- Date: Mon, 15 Nov 2021 11:04:48 GMT
- Title: A Data Quarantine Model to Secure Data in Edge Computing
- Authors: Poornima Mahadevappa, Raja Kumar Murugesan
- Abstract summary: Edge computing provides an agile data processing platform for latency-sensitive and communication-intensive applications.
Data integrity attacks can lead to inconsistent data and intrude on edge data analytics.
This paper proposes a new concept of a data quarantine model to mitigate data integrity attacks by quarantining intruders.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Edge computing provides an agile data processing platform for
latency-sensitive and communication-intensive applications through a
decentralized cloud and geographically distributed edge nodes. Gaining
centralized control over the edge nodes can be challenging due to security
issues and threats. Among several security issues, data integrity attacks can
lead to inconsistent data and intrude on edge data analytics. As an attack
intensifies, it becomes challenging to mitigate and to identify its root cause.
Therefore, this paper proposes a new concept of a data quarantine model that
mitigates data integrity attacks by quarantining intruders. Efficient
quarantine-based security solutions in the cloud, ad-hoc networks, and computer
systems have motivated its adoption in edge computing. The data acquisition
edge nodes identify intruders and quarantine all suspected devices through
dimensionality reduction. During quarantine, the proposed concept builds
reputation scores to determine falsely identified legitimate devices and
sanitizes their affected data to restore data integrity.
As a preliminary investigation, this work identifies an appropriate machine
learning method, Linear Discriminant Analysis (LDA), for dimensionality
reduction. LDA achieves 72.83% quarantine accuracy with a training time of 0.9
seconds, which is more efficient than other state-of-the-art methods. In future
work, this concept will be implemented and validated with ground-truth data.
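To make the described pipeline concrete, below is a minimal sketch of how LDA-based quarantining and a reputation score could be wired together. It assumes scikit-learn's LinearDiscriminantAnalysis as the LDA implementation; the synthetic telemetry features, labels, reputation update rule, and release threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: LDA-based quarantine decision plus a simple reputation
# score. Feature set, labels, thresholds, and the reputation update rule are
# assumptions for illustration only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic telemetry from edge devices: rows are observations, columns are
# assumed features (e.g. packet rate, payload entropy, report deviation).
X_train = rng.normal(size=(500, 8))
y_train = rng.integers(0, 2, size=500)   # 0 = legitimate, 1 = intruder (assumed labels)

# Fit LDA: it reduces dimensionality (to n_classes - 1 components) and yields
# a linear decision rule for flagging suspected devices.
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

# New observations from currently active devices.
device_ids = np.arange(20)
X_new = rng.normal(size=(20, 8))
suspect = lda.predict(X_new) == 1        # devices sent to quarantine

# During quarantine, maintain a reputation score per device; devices whose
# score recovers above a threshold are treated as falsely flagged, and their
# data is sanitized and re-admitted. The update rule below is illustrative.
reputation = np.full(len(device_ids), 0.5)
proba_legit = lda.predict_proba(X_new)[:, 0]
reputation = 0.8 * reputation + 0.2 * proba_legit

RELEASE_THRESHOLD = 0.6                  # assumed value
release = suspect & (reputation > RELEASE_THRESHOLD)
print("quarantined:", device_ids[suspect])
print("released after reputation check:", device_ids[release])
```

In practice, the features could come from the telemetry collected at the data acquisition edge nodes, and the reputation score would be accumulated over repeated observations during the quarantine window rather than from a single batch as in this sketch.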
Related papers
- Complete Security and Privacy for AI Inference in Decentralized Systems [14.526663289437584]
Large models are crucial for tasks like diagnosing diseases but tend to be delicate and not very scalable.
Nesa solves these challenges with a comprehensive framework using multiple techniques to protect data and model outputs.
Nesa's state-of-the-art proofs and principles demonstrate the framework's effectiveness.
arXiv Detail & Related papers (2024-07-28T05:09:17Z)
- EdgeLeakage: Membership Information Leakage in Distributed Edge Intelligence Systems [7.825521416085229]
Decentralized edge nodes aggregate unprocessed data and facilitate data analytics to uphold low transmission latency and real-time data processing capabilities.
Recently, these edge nodes have evolved to facilitate the implementation of distributed machine learning models.
Within the realm of edge intelligence, susceptibility to numerous security and privacy threats against machine learning models becomes evident.
This paper addresses the issue of membership inference leakage in distributed edge intelligence systems.
arXiv Detail & Related papers (2024-03-08T09:28:39Z)
- DAD++: Improved Data-free Test Time Adversarial Defense [12.606555446261668]
We propose a test time Data-free Adversarial Defense (DAD) containing detection and correction frameworks.
We conduct a wide range of experiments and ablations on several datasets and network architectures to show the efficacy of our proposed approach.
Our DAD++ gives an impressive performance against various adversarial attacks with a minimal drop in clean accuracy.
arXiv Detail & Related papers (2023-09-10T20:39:53Z)
- Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution to defend the network against these malicious attacks.
This work proposes a data selection strategy to be applied in the mini-batch training.
The simulation results show that a good compromise can be obtained regarding robustness and standard accuracy.
arXiv Detail & Related papers (2023-01-07T12:09:50Z)
- FedCC: Robust Federated Learning against Model Poisoning Attacks [0.0]
Federated Learning is designed to address privacy concerns in learning models.
This new distributed paradigm safeguards data privacy but changes the attack surface because the server cannot access local datasets.
arXiv Detail & Related papers (2022-12-05T01:52:32Z)
- Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems [4.024567343465081]
The distributed nature of IoMT exposes it to the risk of data poisoning attacks.
Poisoned data can be fabricated by falsifying medical data.
This paper introduces a Federated Data Sanitization Defense, a novel approach to protect the system from data poisoning attacks.
arXiv Detail & Related papers (2022-11-03T05:21:39Z)
- Black-box Dataset Ownership Verification via Backdoor Watermarking [67.69308278379957]
We formulate the protection of released datasets as verifying whether they are adopted for training a (suspicious) third-party model.
We propose to embed external patterns via backdoor watermarking for the ownership verification to protect them.
Specifically, we exploit poison-only backdoor attacks (e.g., BadNets) for dataset watermarking and design a hypothesis-test-guided method for dataset verification.
arXiv Detail & Related papers (2022-08-04T05:32:20Z)
- Autoregressive Perturbations for Data Poisoning [54.205200221427994]
Data scraping from social media has led to growing concerns regarding unauthorized use of data.
Data poisoning attacks have been proposed as a bulwark against scraping.
We introduce autoregressive (AR) poisoning, a method that can generate poisoned data without access to the broader dataset.
arXiv Detail & Related papers (2022-06-08T06:24:51Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed that utilizes a Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, low false-alarm rate, and recall.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.