Breaking On-Chip Communication Anonymity using Flow Correlation Attacks
- URL: http://arxiv.org/abs/2309.15687v2
- Date: Thu, 1 Feb 2024 19:05:15 GMT
- Title: Breaking On-Chip Communication Anonymity using Flow Correlation Attacks
- Authors: Hansika Weerasena and Prabhat Mishra
- Abstract summary: We investigate the security strength of existing anonymous routing protocols in Network-on-Chip (NoC) architectures.
We show that the existing anonymous routing is vulnerable to machine learning (ML) based flow correlation attacks on NoCs.
We propose lightweight anonymous routing with traffic obfuscation techniques to defend against ML-based flow correlation attacks.
- Score: 2.977255700811213
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Network-on-Chip (NoC) is widely used to facilitate communication between
components in sophisticated System-on-Chip (SoC) designs. Security of the
on-chip communication is crucial because exploiting any vulnerability in the
shared NoC would be a goldmine for an attacker, putting the entire computing
infrastructure at risk. NoC security relies on effective countermeasures
against diverse attacks, including attacks on anonymity. We investigate the
security strength of existing anonymous routing protocols in NoC architectures.
Specifically, this paper makes two important contributions. We show that the
existing anonymous routing is vulnerable to machine learning (ML) based flow
correlation attacks on NoCs. We propose lightweight anonymous routing with
traffic obfuscation techniques to defend against ML-based flow correlation
attacks. Experimental studies using both real and synthetic traffic reveal that
our proposed attack is successful against state-of-the-art anonymous routing in
NoC architectures with high accuracy (up to 99%) for diverse traffic patterns,
while our lightweight countermeasure can defend against ML-based attacks with
minor hardware and performance overhead.
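The flow-correlation idea in the abstract can be illustrated with a minimal sketch (the function names, the Pearson-correlation measure, and the synthetic traces below are illustrative assumptions, not the paper's actual ML attack model): two observation points on the same flow share packet-timing structure, so correlating inter-arrival-time sequences separates matched from unmatched flow pairs.

```python
import numpy as np

def correlation_score(ts_a, ts_b):
    """Pearson correlation of two flows' inter-arrival-time sequences.

    A high score suggests both timestamp traces observe the same flow,
    which is the signal a flow correlation attack exploits.
    """
    ia = np.diff(np.asarray(ts_a, dtype=float))
    ib = np.diff(np.asarray(ts_b, dtype=float))
    n = min(len(ia), len(ib))
    ia, ib = ia[:n], ib[:n]
    ia = (ia - ia.mean()) / (ia.std() + 1e-9)  # normalize to zero mean, unit std
    ib = (ib - ib.mean()) / (ib.std() + 1e-9)
    return float(np.mean(ia * ib))

rng = np.random.default_rng(0)
base = np.cumsum(np.tile([1.0, 3.0], 50))                 # bursty timing pattern
same_flow = base + rng.normal(0.0, 0.05, base.size)       # same flow, jittered
other_flow = np.cumsum(rng.uniform(0.5, 3.5, base.size))  # unrelated flow

matched = correlation_score(base, same_flow)
unmatched = correlation_score(base, other_flow)
```

In the paper's setting an ML classifier is trained on such timing features; the proposed traffic obfuscation defends precisely by destroying this timing correlation.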
Related papers
- RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation [2.2667044928324747]
Federated learning (FL) allows multiple devices to train a model collaboratively without sharing their data.
Despite its benefits, FL is vulnerable to privacy leakage and poisoning attacks.
We propose a robust federated learning framework against poisoning attacks (RFLPA) based on the SecAgg protocol.
arXiv Detail & Related papers (2024-05-24T03:31:10Z) - A Quantum of QUIC: Dissecting Cryptography with Post-Quantum Insights [2.522402937703098]
QUIC is a new network protocol standardized in 2021.
It was designed to replace the TCP/TLS stack and is based on UDP.
This paper presents a detailed evaluation of the impact of cryptography on QUIC performance.
arXiv Detail & Related papers (2024-05-15T11:27:28Z) - SISSA: Real-time Monitoring of Hardware Functional Safety and Cybersecurity with In-vehicle SOME/IP Ethernet Traffic [49.549771439609046]
We propose SISSA, a SOME/IP communication traffic-based approach for modeling and analyzing in-vehicle functional safety and cyber security.
Specifically, SISSA models hardware failures with the Weibull distribution and addresses five potential attacks on SOME/IP communication.
Extensive experimental results show the effectiveness and efficiency of SISSA.
arXiv Detail & Related papers (2024-02-21T03:31:40Z) - A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus leaves in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
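The frequency-domain filtering idea can be sketched loosely as follows (the fingerprint construction, the `rfft` transform, and the 3x-median cutoff are illustrative assumptions; FreqFed's actual aggregation mechanism differs in detail): transform each client's flattened update into the frequency domain, compare low-frequency magnitudes, and average only the updates whose spectrum sits near the cohort median.

```python
import numpy as np

def freq_fingerprint(update, k=8):
    """Magnitudes of the first k frequency components of a flattened update."""
    spec = np.fft.rfft(np.asarray(update, dtype=float))
    return np.abs(spec[:k])

def aggregate_in_frequency_domain(updates, k=8):
    """Average the updates whose low-frequency fingerprint lies near the
    cohort median (the 3x-median-distance cutoff is an arbitrary choice)."""
    fps = np.array([freq_fingerprint(u, k) for u in updates])
    med = np.median(fps, axis=0)
    dists = np.linalg.norm(fps - med, axis=1)
    keep = dists <= 3.0 * np.median(dists) + 1e-9
    return np.asarray(updates)[keep].mean(axis=0), keep

rng = np.random.default_rng(1)
base = np.sin(np.linspace(0.0, 4.0, 64))                    # smooth benign direction
benign = [base + rng.normal(0.0, 0.01, 64) for _ in range(5)]
poisoned = base * 50.0                                      # scaled-up malicious update
agg, keep = aggregate_in_frequency_domain(benign + [poisoned])
```

The poisoned update's inflated spectrum puts it far from the median fingerprint, so it is excluded before averaging and the aggregate stays close to the benign direction.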
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Unscrambling the Rectification of Adversarial Attacks Transferability across Computer Networks [4.576324217026666]
Convolutional neural network (CNN) models play a vital role in achieving state-of-the-art performance.
CNNs can be compromised because of their susceptibility to adversarial attacks.
We present a novel and comprehensive method to improve the strength of attacks and assess the transferability of adversarial examples in CNNs.
arXiv Detail & Related papers (2023-10-26T22:36:24Z) - Prevention of cyberattacks in WSN and packet drop by CI framework and information processing protocol using AI and Big Data [0.0]
This study integrates a cognitive intelligence (CI) framework, an information processing protocol, and sophisticated artificial intelligence (AI) and big data analytics approaches.
The framework is capable of detecting and preventing several forms of assaults, such as denial-of-service (DoS) attacks, node compromise, and data tampering.
It is highly resilient to packet drop occurrences, which improves the WSN's overall reliability and performance.
arXiv Detail & Related papers (2023-06-15T19:00:39Z) - Efficient and Low Overhead Website Fingerprinting Attacks and Defenses based on TCP/IP Traffic [16.6602652644935]
Website fingerprinting attacks based on machine learning and deep learning tend to use the most typical features to achieve a satisfactory attack success rate.
To defend against such attacks, random packet defense (RPD) with a high cost of excessive network overhead is usually applied.
We propose a filter-assisted attack against RPD, which can filter out the injected noises using the statistical characteristics of TCP/IP traffic.
We further improve the list-based defense by a traffic splitting mechanism, which can combat the mentioned attacks as well as save a considerable amount of network overhead.
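The statistical filtering step can be pictured with a toy sketch (the packet sizes, the 1% frequency threshold, and the function names are assumptions for illustration, not the paper's actual filter): learn which packet sizes genuine TCP/IP traffic actually produces, then discard injected dummy packets whose sizes are rare or unseen in that profile.

```python
from collections import Counter

def build_size_profile(real_sizes):
    """Empirical frequency of each packet size in genuine traffic."""
    counts = Counter(real_sizes)
    total = sum(counts.values())
    return {size: c / total for size, c in counts.items()}

def filter_injected(trace, profile, min_freq=0.01):
    """Drop packets whose size is rare in the genuine-traffic profile."""
    return [size for size in trace if profile.get(size, 0.0) >= min_freq]

# Genuine traffic here is dominated by full-MTU segments plus small ACKs.
profile = build_size_profile([1500] * 90 + [60] * 10)
noisy_trace = [1500, 777, 60, 1500, 999]   # 777 and 999 are injected dummies
cleaned = filter_injected(noisy_trace, profile)
```

Because random padding rarely matches the tight size distribution of real TCP/IP packets, even this crude profile strips the injected noise while keeping the genuine trace intact.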
arXiv Detail & Related papers (2023-02-27T13:45:15Z) - Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques.
arXiv Detail & Related papers (2022-07-18T09:59:21Z) - Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age, impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.