Late Breaking Results: On the One-Key Premise of Logic Locking
- URL: http://arxiv.org/abs/2408.12690v1
- Date: Thu, 22 Aug 2024 19:05:13 GMT
- Title: Late Breaking Results: On the One-Key Premise of Logic Locking
- Authors: Yinghua Hu, Hari Cherupalli, Mike Borza, Deepak Sherlekar
- Abstract summary: A locking technique is deemed secure if it resists a good array of attacks aimed at finding this correct key.
This paper challenges this one-key premise by introducing a more efficient attack methodology.
Our attack achieves a runtime reduction of up to 99.6% compared to the conventional attack that tries to find a single correct key.
- Score: 0.40980625270164805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The evaluation of logic locking methods has long been predicated on an implicit assumption that only the correct key can unveil the true functionality of a protected circuit. Consequently, a locking technique is deemed secure if it resists a good array of attacks aimed at finding this correct key. This paper challenges this one-key premise by introducing a more efficient attack methodology, focused not on identifying that one correct key, but on finding multiple, potentially incorrect keys that can collectively produce correct functionality from the protected circuit. The tasks of finding these keys can be parallelized, which is well suited for multi-core computing environments. Empirical results show our attack achieves a runtime reduction of up to 99.6% compared to the conventional attack that tries to find a single correct key.
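The attack itself is not spelled out here, but the core idea is simple enough to sketch. Below is a minimal illustration, assuming a toy locked circuit, a brute-force per-slice key search, and a slicing of the input space by one input bit (all illustrative choices; the paper's actual attack, SAT machinery, and benchmarks are not reproduced): each parallel worker finds a key that agrees with the oracle on one slice, and the recovered design dispatches among the slice keys.

```python
# Toy sketch of the multi-key idea: instead of searching for the single
# correct key, search in parallel for keys that are each correct on one
# slice of the input space, then dispatch by slice. The locked circuit,
# oracle, and slicing below are hypothetical stand-ins.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

IN_BITS, KEY_BITS = 3, 4

def oracle(x):
    """Black-box activated chip (toy function)."""
    return (x[0] & x[1]) ^ x[2]

def locked(x, k):
    """Toy locked netlist: key bits XORed into inputs plus a corruption term."""
    return ((x[0] ^ k[0]) & (x[1] ^ k[1])) ^ x[2] ^ (k[2] & k[3])

def key_for_slice(slice_inputs):
    """Brute-force any key that matches the oracle on one input slice only."""
    for k in product((0, 1), repeat=KEY_BITS):
        if all(locked(x, k) == oracle(x) for x in slice_inputs):
            return k
    return None

if __name__ == "__main__":
    inputs = list(product((0, 1), repeat=IN_BITS))
    # Slice the input space by the first input bit (an illustrative choice);
    # each slice only needs a key that is correct on that slice.
    slices = [[x for x in inputs if x[0] == b] for b in (0, 1)]
    with ProcessPoolExecutor() as pool:
        keys = list(pool.map(key_for_slice, slices))
    unlocked = lambda x: locked(x, keys[x[0]])   # dispatch by slice
    assert all(unlocked(x) == oracle(x) for x in inputs)
    print("per-slice keys:", keys)
```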
Related papers
- Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models [79.76293901420146]
Large Language Models (LLMs) are employed across various high-stakes domains, where the reliability of their outputs is crucial.
Our research investigates the fragility of uncertainty estimation and explores potential attacks.
We demonstrate that an attacker can embed a backdoor in LLMs, which, when activated by a specific trigger in the input, manipulates the model's uncertainty without affecting the final output.
arXiv Detail & Related papers (2024-07-15T23:41:11Z)
- SubLock: Sub-Circuit Replacement based Input Dependent Key-based Logic Locking for Robust IP Protection [1.804933160047171]
Existing logic locking techniques are vulnerable to SAT-based attacks.
Several SAT-resistant logic locking methods have been reported, but they incur significant overhead.
This paper proposes a novel input-dependent key-based logic locking (IDKLL) method that effectively prevents SAT-based attacks with low overhead.
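As a toy illustration of the input-dependent key idea (the two-bit key schedule below is made up, not SubLock's construction): because the valid key varies with the input pattern, no single constant key passes an equivalence check over all inputs, which is exactly what a single-key SAT attack searches for.

```python
# Toy IDKLL-style lock: the key that yields correct behavior depends on the
# input, so no constant key is correct everywhere. The key schedule and the
# locked function are made-up examples.
def idkll_locked(x, key):
    want = 0b10 if x[0] else 0b01        # valid key depends on the input
    f = (x[0] ^ x[1]) & x[2]             # original function (toy)
    return f if key == want else f ^ 1   # any other key corrupts the output

inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
for key in (0b01, 0b10):
    ok = all(idkll_locked(x, key) == ((x[0] ^ x[1]) & x[2]) for x in inputs)
    print(f"key {key:02b} correct on all inputs: {ok}")   # False for both
```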
arXiv Detail & Related papers (2024-06-27T11:17:06Z)
- DECOR: Enhancing Logic Locking Against Machine Learning-Based Attacks [0.6131022957085439]
Logic locking (LL) has gained attention as a promising intellectual property protection measure for integrated circuits.
Recent attacks, facilitated by machine learning (ML), have shown the potential to predict the correct key in multiple LL schemes.
This paper presents a generic LL enhancement method based on a randomized algorithm that can significantly decrease the correlation between locked circuit netlist and correct key values in an LL scheme.
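The decorrelation goal is easy to illustrate (the pairing trick below is our illustration of that goal, not DECOR's actual algorithm): in naive XOR/XNOR locking the inserted gate type reveals the key bit, and randomizing the gate/key pairing with a compensating inverter removes that correlation.

```python
# Sketch of decorrelating key-gate structure from key values. An XNOR plus
# a downstream inverter computes the same function as an XOR, so the gate
# type can be chosen independently of the key bit. Illustrative only.
import random

def lock_wire(key_bit):
    """Return (gate_type, needs_inverter); gate type is independent of key_bit."""
    if random.random() < 0.5:
        return ("XOR" if key_bit == 0 else "XNOR"), False   # naive pairing
    return ("XNOR" if key_bit == 0 else "XOR"), True        # flipped + inverter

random.seed(0)
bits = [random.randint(0, 1) for _ in range(1000)]
gates = [lock_wire(b)[0] for b in bits]
# The gate type now carries (almost) no information about the key bit:
p = sum(g == "XOR" for g, b in zip(gates, bits) if b == 0) / bits.count(0)
print(f"P(XOR | key_bit=0) = {p:.2f}")   # ~0.5 instead of 1.0
```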
arXiv Detail & Related papers (2024-03-04T07:31:23Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that it achieves improved robustness with guarantees.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
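The underlying primitive is easy to demonstrate: flipping one stored bit of a float32 weight can change its value drastically. Choosing which bits to flip under the SSA/TSA objectives is the paper's contribution; the target index and bit position below are arbitrary.

```python
# One bit flip in a stored float32 weight (the attack's basic operation).
import numpy as np

weights = np.array([0.25, -1.5, 3.0], dtype=np.float32)
bits = weights.view(np.uint32)        # reinterpret the same bytes as integers
bits[0] ^= np.uint32(1 << 30)         # flip the top exponent bit of weights[0]
print(weights[0])                     # ~8.5e+37: one flipped bit, huge change
```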
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Upper bounds on key rates in device-independent quantum key distribution based on convex-combination attacks [1.118478900782898]
We present the convex-combination attack as an efficient, easy-to-use technique for upper-bounding DIQKD key rates.
It allows verifying the accuracy of lower bounds on key rates for state-of-the-art protocols.
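In our own notation (the paper's exact formulation may differ), the attack reproduces the observed correlations as a convex mixture of local-deterministic behaviors, which are fully predictable to the eavesdropper, and a nonlocal behavior:

```latex
% Convex-combination decomposition (our notation; q_L is the local weight):
p_{\mathrm{obs}}(a,b \mid x,y)
  = q_L \, p_L(a,b \mid x,y) + (1 - q_L) \, p_{NL}(a,b \mid x,y)
```

Maximizing the local weight $q_L$ consistent with the observed behavior then yields an upper bound on the extractable key rate, since local-deterministic rounds contribute no secrecy.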
arXiv Detail & Related papers (2022-06-13T15:27:48Z)
- Recovering AES Keys with a Deep Cold Boot Attack [91.22679787578438]
Cold boot attacks inspect the corrupted random access memory soon after the power has been shut down.
In this work, we combine a novel cryptographic variant of a deep error correcting code technique with a modified SAT solver scheme to apply the attack on AES keys.
Our results show that our methods outperform state-of-the-art attack methods by a very large margin.
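The setting is concrete enough to model in a few lines; the decay direction (1 to 0) and rate below are illustrative assumptions, and the paper's recovery pipeline, a deep error-correcting step combined with a modified SAT solver, is not sketched.

```python
# Toy cold-boot corruption model: bits of a key held in RAM decay after
# power-off; recovery must undo these asymmetric bit errors.
import random
import secrets

def decay_byte(b, delta, rnd):
    out = 0
    for i in range(8):
        bit = (b >> i) & 1
        if bit and rnd.random() < delta:
            bit = 0                            # ground-state decay: 1 -> 0
        out |= bit << i
    return out

rnd = random.Random(0)
key = secrets.token_bytes(16)                  # a 128-bit AES key
corrupted = bytes(decay_byte(b, 0.15, rnd) for b in key)
errors = sum(bin(a ^ b).count("1") for a, b in zip(key, corrupted))
print(f"{errors} of 128 key bits decayed")
```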
arXiv Detail & Related papers (2021-06-09T07:57:01Z)
- A new method controlling the error probability for detecting the photon-number-splitting attack in the decoy-state quantum key distribution [9.081501177870487]
In this paper, we first test, under the null hypothesis of no PNS attack, whether an attack has occurred.
If an attack is indicated, we use the existing decoy-state method and the GLLP formula to estimate the secure key rate.
Otherwise, all received pulses, including both single-photon and multiphoton pulses, can be used to generate the keys.
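A generic sketch of the hypothesis-testing step; the binomial fluctuation model and the significance threshold below are our illustrative assumptions, not the paper's estimator of the error probability.

```python
# Under the null hypothesis of no PNS attack, the observed multiphoton gain
# should match its no-attack expectation up to statistical fluctuation.
import math

def pns_suspected(observed, expected, n_pulses, n_sigmas=5.0):
    """Flag an attack if the deviation exceeds n_sigmas standard deviations."""
    sigma = math.sqrt(expected * (1 - expected) / n_pulses)  # binomial std
    return abs(observed - expected) > n_sigmas * sigma

# If no attack is flagged, all received pulses can contribute to the key;
# otherwise fall back to decoy-state estimation with the GLLP formula.
print(pns_suspected(observed=0.0110, expected=0.0100, n_pulses=10**6))  # True
```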
arXiv Detail & Related papers (2020-12-21T14:38:09Z)
- Challenging the Security of Logic Locking Schemes in the Era of Deep Learning: A Neuroevolutionary Approach [0.2982610402087727]
Deep learning is being introduced in the domain of logic locking.
We present SnapShot: a novel attack on logic locking that is the first of its kind to utilize artificial neural networks.
We show that SnapShot achieves an average key prediction accuracy of 82.60% for the selected attack scenario.
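The attack family is straightforward to sketch: train a model to predict key bits from features of the locked netlist. In the toy below the feature deliberately leaks the key, as naive XOR/XNOR locking does, which is the kind of correlation such attacks exploit; SnapShot's actual feature encoding and neuroevolutionary architecture search are not reproduced.

```python
# Toy key-prediction attack: an MLP learns key bits from (leaky) netlist
# features. Features and model are stand-ins for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
key_bits = rng.integers(0, 2, size=2000)
gate_type = key_bits.copy()                    # XOR -> 0, XNOR -> 1 leakage
noise = rng.standard_normal((2000, 4))         # irrelevant features
X = np.column_stack([gate_type, noise])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X[:1500], key_bits[:1500])
print(clf.score(X[1500:], key_bits[1500:]))    # ~1.0 on this leaky toy
```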
arXiv Detail & Related papers (2020-11-20T13:03:19Z)
- Targeted Attack for Deep Hashing based Retrieval [57.582221494035856]
We propose a novel method, dubbed deep hashing targeted attack (DHTA), to study the targeted attack on such retrieval.
We first formulate the targeted attack as a point-to-set optimization, which minimizes the average distance between the hash code of an adversarial example and those of a set of objects with the target label.
To balance performance and perceptibility, we propose to minimize the Hamming distance between the hash code of the adversarial example and the anchor code under the $\ell_\infty$ restriction on the perturbation.
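The point-to-set step can be sketched directly: for codes in $\{-1,+1\}^K$, the anchor code minimizing the average Hamming distance to a set of same-label hash codes is the componentwise majority vote. The adversarial optimization under the perturbation budget is omitted; the code set below is random toy data.

```python
# Componentwise majority vote as the average-Hamming-distance minimizer.
import numpy as np

def anchor_code(target_codes):
    """Sign of the componentwise sum = per-bit majority vote."""
    s = np.sign(target_codes.sum(axis=0))
    return np.where(s == 0, 1, s)              # break ties toward +1 (a choice)

rng = np.random.default_rng(0)
codes = rng.choice([-1, 1], size=(50, 32))     # 50 toy hash codes of length 32
a = anchor_code(codes)
ham = lambda u, v: int((u != v).sum())
print(np.mean([ham(a, c) for c in codes]))     # minimal average distance
```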
arXiv Detail & Related papers (2020-04-15T08:36:58Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
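For intuition, here is classical input-noise randomized smoothing in miniature; the paper's unifying view generalizes smoothing beyond this instance to certify against label-flipping during training. The base classifier, noise scale, and sample count are placeholder choices.

```python
# Smoothed prediction: majority vote of a base classifier over noisy copies.
import numpy as np

def smoothed_predict(base_clf, x, sigma=0.5, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    noisy = x + sigma * rng.standard_normal((n, x.size))
    votes = np.bincount([base_clf(z) for z in noisy])
    return int(votes.argmax())

base = lambda z: int(z[0] > 0)                 # toy base classifier
print(smoothed_predict(base, np.array([0.8, -0.2])))   # stable majority: 1
```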
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.