Uncertainty-Aware Hardware Trojan Detection Using Multimodal Deep
Learning
- URL: http://arxiv.org/abs/2401.09479v2
- Date: Tue, 23 Jan 2024 07:04:18 GMT
- Title: Uncertainty-Aware Hardware Trojan Detection Using Multimodal Deep
Learning
- Authors: Rahul Vishwakarma, Amin Rezaei
- Abstract summary: The risk of hardware Trojans being inserted at various stages of chip production has increased in a zero-trust fabless era.
We propose a multimodal deep learning approach to detect hardware Trojans and evaluate the results from both early fusion and late fusion strategies.
- Score: 3.118371710802894
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The risk of hardware Trojans being inserted at various stages of chip
production has increased in a zero-trust fabless era. To counter this, various
machine learning solutions have been developed for the detection of hardware
Trojans. While most of the focus has been on either a statistical or deep
learning approach, the limited number of Trojan-infected benchmarks affects the
detection accuracy and restricts the possibility of detecting zero-day Trojans.
To close the gap, we first employ generative adversarial networks to amplify
our data in two alternative representation modalities, graph and tabular,
ensuring that the dataset is distributed in a representative manner. Further,
we propose a multimodal deep learning approach to detect hardware Trojans and
evaluate the results from both early fusion and late fusion strategies. We also
estimate the uncertainty quantification metrics of each prediction for
risk-aware decision-making. The outcomes not only confirm the efficacy of our
proposed hardware Trojan detection method but also open a new door for future
studies employing multimodality and uncertainty quantification to address other
hardware security challenges.
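
The abstract does not name a specific GAN architecture or toolkit for amplifying the scarce Trojan-infected class. The sketch below is a minimal, generic GAN over the tabular modality, assuming each design has already been encoded as a fixed-length feature vector; all dimensions and names (feature_dim, real_features, and so on) are illustrative placeholders, not the paper's actual components.

```python
# Minimal sketch of GAN-based augmentation for the tabular modality.
# Assumption: each circuit is already a fixed-length feature vector;
# sizes and variable names below are illustrative only.
import torch
import torch.nn as nn

feature_dim, latent_dim = 32, 16  # hypothetical sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, feature_dim),
)
discriminator = nn.Sequential(
    nn.Linear(feature_dim, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Stand-in for the scarce real Trojan-infected rows (normalized features).
real_features = torch.randn(256, feature_dim)

for step in range(1000):
    # Train the discriminator: real rows -> 1, generated rows -> 0.
    real = real_features[torch.randint(0, len(real_features), (64,))]
    fake = generator(torch.randn(64, latent_dim)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) \
           + bce(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    g_loss = bce(discriminator(generator(torch.randn(64, latent_dim))),
                 torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Sample synthetic rows to augment the minority (Trojan) class.
synthetic_rows = generator(torch.randn(500, latent_dim)).detach()
```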
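For the two fusion strategies, a rough illustration is given below: early fusion concatenates the graph and tabular representations before a joint classifier, while late fusion trains one head per modality and averages the class probabilities. The encoders, layer sizes, and combination rule are assumptions for illustration; the paper's actual architecture may differ.

```python
# Sketch of early vs. late fusion for two modalities (graph, tabular).
# Assumption: a graph encoder has already produced a fixed-size embedding;
# dimensions and layer sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

graph_dim, tab_dim, n_classes = 64, 32, 2

class EarlyFusionNet(nn.Module):
    """Concatenate modality features, then classify jointly."""
    def __init__(self):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(graph_dim + tab_dim, 128), nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, n_classes),
        )
    def forward(self, g, t):
        return self.classifier(torch.cat([g, t], dim=-1))

class LateFusionNet(nn.Module):
    """Classify each modality separately, then average class probabilities."""
    def __init__(self):
        super().__init__()
        self.graph_head = nn.Sequential(nn.Linear(graph_dim, 64), nn.ReLU(),
                                        nn.Linear(64, n_classes))
        self.tab_head = nn.Sequential(nn.Linear(tab_dim, 64), nn.ReLU(),
                                      nn.Linear(64, n_classes))
    def forward(self, g, t):
        p_g = torch.softmax(self.graph_head(g), dim=-1)
        p_t = torch.softmax(self.tab_head(t), dim=-1)
        return (p_g + p_t) / 2  # simple averaging; weighted schemes also possible

g = torch.randn(8, graph_dim)   # stand-in graph embeddings
t = torch.randn(8, tab_dim)     # stand-in tabular features
print(EarlyFusionNet()(g, t).shape, LateFusionNet()(g, t).shape)
```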
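The abstract also does not state which estimator backs the per-prediction uncertainty metrics. Monte Carlo dropout with predictive entropy is one common choice and is sketched below on a toy dropout classifier standing in for the fused model; the entropy threshold for deferring a Trojan/benign call to manual review is a hypothetical risk tolerance.

```python
# Sketch of per-prediction uncertainty via Monte Carlo dropout.
# Assumption: predictive entropy over stochastic forward passes is used
# as the uncertainty proxy; the paper's exact estimator is not specified.
import torch
import torch.nn as nn

def mc_dropout_predict(model, *inputs, n_samples=30):
    """Mean class probabilities and predictive entropy over stochastic passes."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(*inputs), dim=-1)
                             for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

# Standalone demo with a toy dropout classifier (placeholder for the fused model).
toy = nn.Sequential(nn.Linear(96, 64), nn.ReLU(), nn.Dropout(0.3), nn.Linear(64, 2))
x = torch.randn(8, 96)
mean_probs, entropy = mc_dropout_predict(toy, x)

threshold = 0.5  # hypothetical risk tolerance for deferring to manual review
for p, h in zip(mean_probs, entropy):
    verdict = "Trojan" if p[1] > 0.5 else "benign"
    print(f"{verdict:>6}  p_max={p.max():.2f}  "
          f"{'REVIEW' if h > threshold else 'accept'}")
```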
Related papers
- Risk-Aware and Explainable Framework for Ensuring Guaranteed Coverage in Evolving Hardware Trojan Detection [2.6396287656676733]
In high-risk and sensitive domains, we cannot accept even a small misclassification.
In this paper, we generate evolving hardware Trojans using our proposed novel conformalized generative adversarial networks.
The proposed approach has been validated on both synthetic and real chip-level benchmarks.
arXiv Detail & Related papers (2023-10-14T03:30:21Z)
- TrojanNet: Detecting Trojans in Quantum Circuits using Machine Learning [5.444459446244819]
TrojanNet is a novel approach to enhance the security of quantum circuits by detecting and classifying Trojan-inserted circuits.
We generate 12 diverse datasets by introducing variations in Trojan gate types, the number of gates, insertion locations, and compilers.
Experimental results showcase an average accuracy of 98.80% and an average F1-score of 98.53% in effectively detecting and classifying Trojan-inserted QAOA circuits.
arXiv Detail & Related papers (2023-06-29T05:56:05Z)
- Evil from Within: Machine Learning Backdoors through Hardware Trojans [72.99519529521919]
Backdoors pose a serious threat to machine learning, as they can compromise the integrity of security-critical systems, such as self-driving cars.
We introduce a backdoor attack that completely resides within a common hardware accelerator for machine learning.
We demonstrate the practical feasibility of our attack by implanting our hardware trojan into the Xilinx Vitis AI DPU.
arXiv Detail & Related papers (2023-04-17T16:24:48Z)
- A survey on hardware-based malware detection approaches [45.24207460381396]
Hardware-based malware detection approaches leverage hardware performance counters and machine learning prowess.
We meticulously analyze the approach, unraveling the most common methods, algorithms, tools, and datasets that shape its contours.
The discussion extends to crafting mixed hardware and software approaches for collaborative efficacy, essential enhancements in hardware monitoring units, and a better understanding of the correlation between hardware events and malware applications.
arXiv Detail & Related papers (2023-03-22T13:00:41Z)
- Design for Trust utilizing Rareness Reduction [2.977255700811213]
This paper investigates rareness reduction as a design-for-trust solution to make it harder for an adversary to hide Trojans.
It also reveals that reducing rareness leads to faster Trojan detection as well as improved coverage by Trojan detection methods.
arXiv Detail & Related papers (2023-02-17T16:42:11Z)
- Third-Party Hardware IP Assurance against Trojans through Supervised Learning and Post-processing [3.389624476049805]
VIPR is a systematic machine learning (ML) based trust verification solution for 3PIPs.
We present a comprehensive framework, associated algorithms, and a tool flow for obtaining an optimal set of features.
The proposed post-processing algorithms reduce false positives by up to 92.85%.
arXiv Detail & Related papers (2021-11-29T21:04:53Z)
- Robust and Transferable Anomaly Detection in Log Data using Pre-Trained Language Models [59.04636530383049]
Anomalies or failures in large computer systems, such as the cloud, have an impact on a large number of users.
We propose a framework for anomaly detection in log data, as a major troubleshooting source of system information.
arXiv Detail & Related papers (2021-02-23T09:17:05Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases [87.69818690239627]
We study the problem of Trojan network (TrojanNet) detection in the data-scarce regime.
We propose a data-limited TrojanNet detector (TND), when only a few data samples are available for TrojanNet detection.
In addition, we propose a data-free TND, which can detect a TrojanNet without accessing any data samples.
arXiv Detail & Related papers (2020-07-31T02:00:38Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)