Design for Trust utilizing Rareness Reduction
- URL: http://arxiv.org/abs/2302.08984v2
- Date: Wed, 17 Apr 2024 00:54:51 GMT
- Title: Design for Trust utilizing Rareness Reduction
- Authors: Aruna Jayasena, Prabhat Mishra
- Abstract summary: This paper investigates rareness reduction as a design-for-trust solution to make it harder for an adversary to hide Trojans.
It also reveals that reducing rareness leads to faster Trojan detection as well as improved coverage by Trojan detection methods.
- Score: 2.977255700811213
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Increasing design complexity and reduced time-to-market have motivated manufacturers to outsource some parts of the System-on-Chip (SoC) design flow to third-party vendors. This provides an opportunity for attackers to introduce hardware Trojans by constructing stealthy triggers consisting of rare events (e.g., rare signals, states, and transitions). There are promising test generation-based hardware Trojan detection techniques that rely on the activation of rare events. In this paper, we investigate rareness reduction as a design-for-trust solution to make it harder for an adversary to hide Trojans (easier for Trojan detection). Specifically, we analyze different avenues to reduce the potential rare trigger cases, including design diversity and area optimization. While there is a good understanding of the relationship between area, power, energy, and performance, this research provides a better insight into the dependency between area and security. Our experimental evaluation demonstrates that area reduction leads to a reduction in rareness. It also reveals that reducing rareness leads to faster Trojan detection as well as improved coverage by Trojan detection methods.
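As a concrete illustration of the rare events described in the abstract, the sketch below estimates signal rareness on a toy gate-level netlist by random simulation: each net's rareness is the probability of its less frequent logic value, and nets below a threshold are the low-activity candidates an adversary could combine into a stealthy trigger. This is a minimal sketch under stated assumptions; the netlist, threshold, and helper names are illustrative, not the paper's implementation.

```python
import random

# Hypothetical 4-input netlist: each net maps to (gate type, fan-in nets).
NETLIST = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("AND", ["c", "d"]),
    "n3": ("AND", ["n1", "n2"]),  # equals 1 only when a=b=c=d=1, i.e. rarely
    "n4": ("OR", ["a", "c"]),
}
PRIMARY_INPUTS = ["a", "b", "c", "d"]


def evaluate(gate, inputs):
    if gate == "AND":
        return all(inputs)
    if gate == "OR":
        return any(inputs)
    raise ValueError(f"unsupported gate type: {gate}")


def estimate_rareness(num_vectors=100_000, threshold=0.1):
    """Return {net: (rareness, is_rare)} from random-stimulus simulation."""
    ones = {net: 0 for net in NETLIST}
    for _ in range(num_vectors):
        values = {pi: random.randint(0, 1) for pi in PRIMARY_INPUTS}
        for net, (gate, fanin) in NETLIST.items():  # nets are in topological order
            values[net] = int(evaluate(gate, [values[f] for f in fanin]))
            ones[net] += values[net]
    report = {}
    for net, count in ones.items():
        p_one = count / num_vectors
        rareness = min(p_one, 1.0 - p_one)  # probability of the rarer value
        report[net] = (rareness, rareness < threshold)
    return report


if __name__ == "__main__":
    for net, (rareness, is_rare) in estimate_rareness().items():
        print(f"{net}: rareness={rareness:.4f} rare={is_rare}")
```

In the paper's framing, design-for-trust transformations such as area optimization or design diversity change the logic structure so that fewer nets fall below such a rareness threshold, which in turn makes test generation-based Trojan detection faster and more complete.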
Related papers
- Uncertainty-Aware Hardware Trojan Detection Using Multimodal Deep Learning [3.118371710802894]
The risk of hardware Trojans being inserted at various stages of chip production has increased in a zero-trust fabless era.
We propose a multimodal deep learning approach to detect hardware Trojans and evaluate the results from both early fusion and late fusion strategies.
arXiv Detail & Related papers (2024-01-15T05:45:51Z)
- Design for Assurance: Employing Functional Verification Tools for Thwarting Hardware Trojan Threat in 3PIPs [13.216074408064117]
Third-party intellectual property cores are essential building blocks of modern system-on-chip and integrated circuit designs.
These design components usually come from vendors of different trust levels and may contain undocumented design functionality.
We develop a method for identifying and preventing hardware Trojans, employing functional verification tools and languages familiar to hardware designers.
arXiv Detail & Related papers (2023-11-21T03:32:07Z)
- Logic Locking based Trojans: A Friend Turns Foe [4.09675763028423]
A common structure in many logic locking techniques has properties that are desirable for hardware Trojans (HWTs).
We then construct a novel type of HWT, called Trojans based on Logic Locking (TroLL), in a way that can evade state-of-the-art ATPG-based HWT detection techniques.
arXiv Detail & Related papers (2023-09-26T16:55:42Z)
- Evil from Within: Machine Learning Backdoors through Hardware Trojans [72.99519529521919]
Backdoors pose a serious threat to machine learning, as they can compromise the integrity of security-critical systems, such as self-driving cars.
We introduce a backdoor attack that completely resides within a common hardware accelerator for machine learning.
We demonstrate the practical feasibility of our attack by implanting our hardware trojan into the Xilinx Vitis AI DPU.
arXiv Detail & Related papers (2023-04-17T16:24:48Z)
- Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips [51.17948837118876]
We present the hardly perceptible Trojan attack (HPT).
HPT crafts hardly perceptible Trojan images by utilizing additive noise and a per-pixel flow field.
To achieve superior attack performance, we propose to jointly optimize bit flips, additive noise, and flow field.
arXiv Detail & Related papers (2022-07-27T09:56:17Z)
- Game of Trojans: A Submodular Byzantine Approach [9.512062990461212]
We provide an analytical characterization of adversarial capability and strategic interactions between the adversary and detection mechanism.
We propose a Submodular Trojan algorithm to determine the minimal fraction of samples into which to inject a Trojan trigger.
We show that the adversary wins the game with probability one, thus bypassing detection.
arXiv Detail & Related papers (2022-07-13T03:12:26Z)
- Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free [126.15842954405929]
Trojan attacks threaten deep neural networks (DNNs) by poisoning them to behave normally on most samples, yet produce manipulated results for inputs that carry a trigger.
We propose a novel Trojan network detection regime: first locating a "winning Trojan lottery ticket" that preserves nearly full Trojan information yet achieves only chance-level performance on clean inputs, then recovering the trigger embedded in this isolated subnetwork.
arXiv Detail & Related papers (2022-05-24T06:33:31Z)
- Trigger Hunting with a Topological Prior for Trojan Detection [16.376009231934884]
This paper tackles the problem of Trojan detection, namely, identifying Trojaned models.
One popular approach is reverse engineering: recovering the trigger on a clean image by searching for a perturbation that manipulates the model's prediction (a conceptual sketch of this step appears after this list).
One major challenge of the reverse-engineering approach is the enormous search space of triggers.
We propose innovative priors such as diversity and topological simplicity to not only increase the chances of finding the appropriate triggers but also improve the quality of the recovered triggers.
arXiv Detail & Related papers (2021-10-15T19:47:00Z)
- Odyssey: Creation, Analysis and Detection of Trojan Models [91.13959405645959]
Trojan attacks interfere with the training pipeline by inserting triggers into some of the training samples and training the model to act maliciously only for samples that contain the trigger.
Existing Trojan detectors make strong assumptions about the types of triggers and attacks.
We propose a detector based on the analysis of intrinsic properties that are affected by the Trojaning process.
arXiv Detail & Related papers (2020-07-16T06:55:00Z)
- An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks [59.42357806777537]
A Trojan attack targets deployed deep neural networks (DNNs), relying on hidden trigger patterns inserted by hackers.
We propose a training-free attack approach, different from previous work in which trojaned behaviors are injected by retraining the model on a poisoned dataset.
The proposed TrojanNet has several nice properties: (1) it is activated by tiny trigger patterns and stays silent for other inputs, (2) it is model-agnostic and can be injected into most DNNs, dramatically expanding its attack scenarios, and (3) the training-free mechanism saves massive training effort compared to conventional Trojan attack methods.
arXiv Detail & Related papers (2020-06-15T04:58:28Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering-based approach whose computational complexity does not scale with the number of labels, and which is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
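Several of the detectors summarized above (e.g., Trigger Hunting and Scalable Backdoor Detection) share a common core step: optimizing a small candidate trigger over clean inputs so that it pushes the model's prediction toward a target class, and flagging the model if a suspiciously small trigger suffices. The PyTorch sketch below illustrates only that shared step; the function name, hyperparameters, and mask/pattern parameterization are illustrative assumptions rather than any cited paper's implementation.

```python
import torch
import torch.nn.functional as F


def reverse_engineer_trigger(model, clean_images, target_class,
                             steps=500, lr=0.1, lambda_mask=0.01):
    """Optimize a trigger (mask + pattern) that flips clean_images (N, C, H, W)
    toward target_class; returns the trigger and its mask size."""
    _, c, h, w = clean_images.shape
    # Unconstrained parameters; sigmoid keeps mask and pattern in [0, 1].
    mask_param = torch.zeros(1, 1, h, w, requires_grad=True)
    pattern_param = torch.zeros(1, c, h, w, requires_grad=True)
    optimizer = torch.optim.Adam([mask_param, pattern_param], lr=lr)
    target = torch.full((clean_images.size(0),), target_class, dtype=torch.long)

    model.eval()
    for _ in range(steps):
        mask = torch.sigmoid(mask_param)
        pattern = torch.sigmoid(pattern_param)
        # Blend the candidate trigger into every clean image.
        triggered = (1 - mask) * clean_images + mask * pattern
        logits = model(triggered)
        # Misclassification loss plus an L1 penalty that keeps the mask small.
        loss = F.cross_entropy(logits, target) + lambda_mask * mask.abs().sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        mask = torch.sigmoid(mask_param)
        return mask, torch.sigmoid(pattern_param), mask.abs().sum().item()
```

Repeating this search for every candidate target label and comparing the sizes of the recovered masks is the basic detection heuristic; the diversity and topological-simplicity priors of Trigger Hunting constrain this search, while Scalable Backdoor Detection avoids repeating it once per label.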
This list is automatically generated from the titles and abstracts of the papers on this site.