EdgeLeakage: Membership Information Leakage in Distributed Edge Intelligence Systems
- URL: http://arxiv.org/abs/2404.16851v1
- Date: Fri, 8 Mar 2024 09:28:39 GMT
- Title: EdgeLeakage: Membership Information Leakage in Distributed Edge Intelligence Systems
- Authors: Kongyang Chen, Yi Lin, Hui Luo, Bing Mi, Yatie Xiao, Chao Ma, Jorge Sá Silva
- Abstract summary: Decentralized edge nodes aggregate unprocessed data and facilitate data analytics to uphold low transmission latency and real-time data processing capabilities.
Recently, these edge nodes have evolved to facilitate the implementation of distributed machine learning models.
Within the realm of edge intelligence, machine learning models become susceptible to numerous security and privacy threats.
This paper addresses the issue of membership inference leakage in distributed edge intelligence systems.
- Score: 7.825521416085229
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In contemporary edge computing systems, decentralized edge nodes aggregate unprocessed data and facilitate data analytics to uphold low transmission latency and real-time data processing capabilities. Recently, these edge nodes have evolved to facilitate the implementation of distributed machine learning models, utilizing their computational resources to enable intelligent decision-making, thereby giving rise to an emerging domain referred to as edge intelligence. However, within the realm of edge intelligence, machine learning models become susceptible to numerous security and privacy threats. This paper addresses the issue of membership inference leakage in distributed edge intelligence systems. Specifically, our focus is on an autonomous scenario wherein edge nodes collaboratively generate a global model. We use membership inference attacks to elucidate the potential data leakage in this context, and we examine several defense mechanisms aimed at mitigating this leakage. Experimental results affirm that our approach is effective in detecting data leakage within edge intelligence systems, and that our defense methods are instrumental in alleviating this security threat. Consequently, our findings contribute to safeguarding data privacy in edge intelligence systems.
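To make the threat model concrete, here is a minimal sketch of a generic confidence-thresholding membership inference attack. This is not the paper's specific method: the synthetic confidence distributions, the threshold sweep, and all names below are illustrative assumptions, standing in for scores that would normally come from shadow models trained on data similar to the target's.

```python
import numpy as np

def confidence_mia(confidences, threshold):
    """Guess membership from a model's top-class confidence.

    The classic observation behind membership inference: models are
    usually more confident on training ("member") samples than on
    unseen ("non-member") samples.
    """
    return confidences >= threshold

def calibrate_threshold(member_conf, nonmember_conf):
    """Sweep thresholds and keep the one that best separates the two
    confidence distributions (in practice these come from shadow models)."""
    def balanced_acc(t):
        return 0.5 * (np.mean(member_conf >= t) + np.mean(nonmember_conf < t))
    return max(np.linspace(0.0, 1.0, 101), key=balanced_acc)

# Toy demonstration with synthetic confidence scores.
rng = np.random.default_rng(0)
member_conf = np.clip(rng.normal(0.95, 0.05, 1000), 0, 1)     # overfitting -> high confidence
nonmember_conf = np.clip(rng.normal(0.75, 0.15, 1000), 0, 1)  # unseen data -> lower confidence
t = calibrate_threshold(member_conf, nonmember_conf)
guesses = confidence_mia(np.concatenate([member_conf, nonmember_conf]), t)
truth = np.concatenate([np.ones(1000, bool), np.zeros(1000, bool)])
print(f"threshold={t:.2f}, attack accuracy={np.mean(guesses == truth):.2f}")
```

Any attack accuracy noticeably above 0.5 signals leakage; defenses such as regularization or differential privacy aim to push the two confidence distributions back together.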
Related papers
- Adversarial Challenges in Network Intrusion Detection Systems: Research Insights and Future Prospects [0.33554367023486936]
This paper provides a comprehensive review of machine learning-based Network Intrusion Detection Systems (NIDS).
We critically examine existing research in NIDS, highlighting key trends, strengths, and limitations.
We discuss emerging challenges in the field and offer insights for the development of more robust and resilient NIDS.
arXiv Detail & Related papers (2024-09-27T13:27:29Z)
- Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z)
- Distributed Threat Intelligence at the Edge Devices: A Large Language Model-Driven Approach [0.0]
Decentralized threat intelligence on edge devices represents a promising paradigm for enhancing cybersecurity on resource-constrained edge devices.
This approach involves the deployment of lightweight machine learning models directly onto edge devices to analyze local data streams, such as network traffic and system logs, in real-time.
Our proposed framework can improve edge computing security by strengthening cyber threat detection and mitigation and by isolating compromised edge devices from the network; a minimal sketch of on-device detection follows.
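The paper's framework is LLM-driven; purely as a generic illustration of lightweight on-device detection, the sketch below swaps in a small Isolation Forest over hypothetical per-flow features (all feature names and numbers are assumptions):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-flow features from local traffic logs:
# [bytes_sent, bytes_received, duration_s, distinct_ports]
rng = np.random.default_rng(1)
benign = rng.normal([5e3, 8e3, 2.0, 3], [1e3, 2e3, 0.5, 1], (500, 4))

# Small, cheap model suited to resource-constrained edge hardware.
detector = IsolationForest(contamination=0.01, random_state=0).fit(benign)

def is_suspicious(flow):
    """Return True if a flow looks anomalous (candidate threat)."""
    return detector.predict(np.asarray(flow).reshape(1, -1))[0] == -1

print(is_suspicious([5e3, 8e3, 2.0, 3]))      # typical flow: expected False
print(is_suspicious([9e5, 1e2, 300.0, 200]))  # exfiltration-like: expected True
```

A flagged device could then be quarantined from the network, matching the isolation strategy described above.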
arXiv Detail & Related papers (2024-05-14T16:40:37Z)
- X-CBA: Explainability Aided CatBoosted Anomal-E for Intrusion Detection System [2.556190321164248]
Using machine learning (ML) and deep learning (DL) models in Intrusion Detection Systems has led to a trust deficit due to their non-transparent decision-making.
This paper introduces a novel Explainable IDS approach, called X-CBA, that leverages the structural advantages of Graph Neural Networks (GNNs) to effectively process network traffic data.
Our approach achieves 99.47% accuracy in threat detection and provides clear, actionable explanations of its analytical outcomes.
arXiv Detail & Related papers (2024-02-01T18:29:16Z)
- DefectHunter: A Novel LLM-Driven Boosted-Conformer-based Code Vulnerability Detection Mechanism [3.9377491512285157]
DefectHunter is an innovative model for vulnerability identification that employs the Conformer mechanism.
This mechanism fuses self-attention with convolutional networks to capture both local, position-wise features and global, content-based interactions, as in the sketch below.
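As a minimal sketch of that fusion (a toy block loosely in the spirit of Conformer-style encoders, not DefectHunter's actual architecture; all dimensions and names are assumptions):

```python
import torch
import torch.nn as nn

class MiniConformerBlock(nn.Module):
    """Pairs self-attention (global, content-based interactions) with a
    depthwise convolution (local, position-wise features)."""
    def __init__(self, dim=64, heads=4, kernel=7):
        super().__init__()
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.conv_norm = nn.LayerNorm(dim)
        self.conv = nn.Conv1d(dim, dim, kernel, padding=kernel // 2, groups=dim)

    def forward(self, x):                        # x: (batch, seq, dim)
        a = self.attn_norm(x)
        x = x + self.attn(a, a, a)[0]            # global context via attention
        c = self.conv_norm(x).transpose(1, 2)    # (batch, dim, seq) for Conv1d
        return x + self.conv(c).transpose(1, 2)  # local patterns via conv

tokens = torch.randn(2, 16, 64)            # e.g., embedded code tokens
print(MiniConformerBlock()(tokens).shape)  # torch.Size([2, 16, 64])
```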
arXiv Detail & Related papers (2023-09-27T00:10:29Z)
- A Data Quarantine Model to Secure Data in Edge Computing [0.0]
Edge computing provides an agile data processing platform for latency-sensitive and communication-intensive applications.
Data integrity attacks can lead to inconsistent data and disrupt edge data analytics.
This paper proposes a new data quarantine model that mitigates data integrity attacks by quarantining intruders.
arXiv Detail & Related papers (2021-11-15T11:04:48Z)
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop a unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in input data, the inability to explain decisions, and bias in training data are some of the most prominent limitations of current AI systems.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance; a simplified sketch of the adversarial-filtering idea follows.
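The sketch below illustrates the adversarial-filtering idea only; for brevity it uses plain MLPs on tabular features and a cross-entropy adversary in place of the paper's graph models and total variation/Wasserstein objectives, and all shapes and coefficients are assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical setup: x are node features, s is a binary sensitive
# attribute the filtered embedding z should reveal nothing about.
torch.manual_seed(0)
x = torch.randn(256, 16)
s = (x[:, 0] > 0).float().unsqueeze(1)  # the attribute leaks via feature 0

filt = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))
adv = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1))
opt_f = torch.optim.Adam(filt.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adv.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    # 1) Adversary learns to recover s from the filtered embedding.
    opt_a.zero_grad()
    bce(adv(filt(x).detach()), s).backward()
    opt_a.step()
    # 2) Filter is trained to defeat the adversary, while a small
    #    reconstruction penalty keeps the embedding informative.
    opt_f.zero_grad()
    z = filt(x)
    (-bce(adv(z), s) + 0.1 * (z - x).pow(2).mean()).backward()
    opt_f.step()

with torch.no_grad():
    acc = ((adv(filt(x)) > 0).float() == s).float().mean().item()
print(f"adversary accuracy after filtering: {acc:.2f} (near 0.5 means hidden)")
```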
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Identity-Aware Attribute Recognition via Real-Time Distributed Inference in Mobile Edge Clouds [53.07042574352251]
We design novel models for pedestrian attribute recognition with re-ID in an MEC-enabled camera monitoring system.
We propose a novel inference framework with a set of distributed modules, by jointly considering the attribute recognition and person re-ID.
We then devise a learning-based algorithm for distributing the modules of the proposed inference framework.
arXiv Detail & Related papers (2020-08-12T12:03:27Z)
- Survey of Network Intrusion Detection Methods from the Perspective of the Knowledge Discovery in Databases Process [63.75363908696257]
We review the methods that have been applied to network data with the purpose of developing an intrusion detector.
We discuss the techniques used for the capture, preparation, and transformation of the data, as well as the data mining and evaluation methods.
As a result of this literature review, we identify open issues that need to be considered in further research on network security.
arXiv Detail & Related papers (2020-01-27T11:21:05Z)