Late Breaking Results: On the One-Key Premise of Logic Locking
- URL: http://arxiv.org/abs/2408.12690v1
- Date: Thu, 22 Aug 2024 19:05:13 GMT
- Title: Late Breaking Results: On the One-Key Premise of Logic Locking
- Authors: Yinghua Hu, Hari Cherupalli, Mike Borza, Deepak Sherlekar,
- Abstract summary: A locking technique is deemed secure if it resists a good array of attacks aimed at finding this correct key.
This paper challenges this one-key premise by introducing a more efficient attack methodology.
Our attack achieves a runtime reduction of up to 99.6% compared to the conventional attack that tries to find a single correct key.
- Score: 0.40980625270164805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The evaluation of logic locking methods has long been predicated on an implicit assumption that only the correct key can unveil the true functionality of a protected circuit. Consequently, a locking technique is deemed secure if it resists a good array of attacks aimed at finding this correct key. This paper challenges this one-key premise by introducing a more efficient attack methodology, focused not on identifying that one correct key, but on finding multiple, potentially incorrect keys that can collectively produce correct functionality from the protected circuit. The tasks of finding these keys can be parallelized, which is well suited for multi-core computing environments. Empirical results show our attack achieves a runtime reduction of up to 99.6% compared to the conventional attack that tries to find a single correct key.
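The core idea lends itself to a small illustration. Below is a minimal, hypothetical sketch of the multi-key attack strategy: the input space of a toy locked circuit is partitioned, and each partition is searched in parallel for any key, correct or not, that matches the oracle on that partition alone. All names here (`oracle_eval`, `locked_eval`, `find_key_for_region`) and the brute-force search are illustrative assumptions; the paper's actual attack would rely on SAT-based, oracle-guided machinery rather than enumeration.

```python
# Hypothetical sketch: parallel search for multiple (possibly incorrect) keys
# that collectively reproduce the oracle's functionality. Toy circuit only.
from multiprocessing import Pool

KEY_BITS = 4
IN_BITS = 3

def oracle_eval(x: int) -> int:
    """Stand-in for the unlocked oracle: some fixed Boolean function."""
    return (x ^ (x >> 1)) & 1

def locked_eval(x: int, key: int) -> int:
    """Toy locked circuit: correct at input x only if key bit (x mod KEY_BITS)
    is set, so the single globally correct key is 0b1111."""
    good = (key >> (x % KEY_BITS)) & 1
    return oracle_eval(x) if good else 1 - oracle_eval(x)

def find_key_for_region(region):
    """Find any key that matches the oracle on this input region alone.
    The key may be wrong elsewhere; that is the point of the attack."""
    for key in range(2 ** KEY_BITS):
        if all(locked_eval(x, key) == oracle_eval(x) for x in region):
            return region, key
    return region, None

if __name__ == "__main__":
    inputs = list(range(2 ** IN_BITS))
    # Partition the input space; each partition is an independent search task
    # that can run on its own core.
    regions = [inputs[i::4] for i in range(4)]
    with Pool() as pool:
        results = pool.map(find_key_for_region, regions)
    # The resulting {region -> key} map recovers correct functionality
    # collectively, even though no single returned key is correct everywhere.
    for region, key in results:
        if key is None:
            print(f"inputs {region}: no key found")
        else:
            print(f"inputs {region}: key = {key:0{KEY_BITS}b}")
```

In this toy, the four regional searches return the keys 0b0001, 0b0010, 0b0100, and 0b1000, none of which is the globally correct key 0b1111, yet together they reproduce the oracle on every input, mirroring the paper's observation that correct functionality does not require finding the one correct key.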
Related papers
- Cute-Lock: Behavioral and Structural Multi-Key Logic Locking Using Time Base Keys [1.104960878651584]
We propose, implement and evaluate a family of secure multi-key logic locking algorithms called Cute-Lock.
Our experimental results under a diverse range of attacks confirm that, compared to vulnerable state-of-the-art methods, employing the Cute-Lock family drives attacking attempts to a dead end without additional overhead.
arXiv Detail & Related papers (2025-01-29T03:44:55Z) - K-Gate Lock: Multi-Key Logic Locking Using Input Encoding Against Oracle-Guided Attacks [1.104960878651584]
K-Gate Lock encodes input patterns using multiple keys that are applied to one set of key inputs at different operational times.
Using multiple keys makes the circuit secure against oracle-guided attacks and increases the attacker's effort to an exponentially time-consuming brute-force search.
arXiv Detail & Related papers (2025-01-03T22:07:38Z) - BiCert: A Bilinear Mixed Integer Programming Formulation for Precise Certified Bounds Against Data Poisoning Attacks [62.897993591443594]
Data poisoning attacks pose one of the biggest threats to modern AI systems.
arXiv Detail & Related papers (2024-12-13T14:56:39Z) - Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models [79.76293901420146]
Large Language Models (LLMs) are employed across various high-stakes domains, where the reliability of their outputs is crucial.
Our research investigates the fragility of uncertainty estimation and explores potential attacks.
We demonstrate that an attacker can embed a backdoor in LLMs, which, when activated by a specific trigger in the input, manipulates the model's uncertainty without affecting the final output.
arXiv Detail & Related papers (2024-07-15T23:41:11Z) - SubLock: Sub-Circuit Replacement based Input Dependent Key-based Logic Locking for Robust IP Protection [1.804933160047171]
Existing logic locking techniques are vulnerable to SAT-based attacks.
Several SAT-resistant logic locking methods have been reported, but they incur significant overhead.
This paper proposes a novel input-dependent key-based logic locking (IDKLL) scheme that effectively prevents SAT-based attacks with low overhead.
arXiv Detail & Related papers (2024-06-27T11:17:06Z) - DECOR: Enhancing Logic Locking Against Machine Learning-Based Attacks [0.6131022957085439]
Logic locking (LL) has gained attention as a promising intellectual property protection measure for integrated circuits.
Recent attacks, facilitated by machine learning (ML), have shown the potential to predict the correct key in multiple LL schemes.
This paper presents a generic LL enhancement method based on a randomized algorithm that can significantly decrease the correlation between locked circuit netlist and correct key values in an LL scheme.
arXiv Detail & Related papers (2024-03-04T07:31:23Z) - Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z) - Recovering AES Keys with a Deep Cold Boot Attack [91.22679787578438]
Cold boot attacks inspect the corrupted random access memory soon after the power has been shut down.
In this work, we combine a novel cryptographic variant of a deep error correcting code technique with a modified SAT solver scheme to apply the attack on AES keys.
Our results show that our methods outperform state-of-the-art attack methods by a very large margin.
arXiv Detail & Related papers (2021-06-09T07:57:01Z) - Challenging the Security of Logic Locking Schemes in the Era of Deep Learning: A Neuroevolutionary Approach [0.2982610402087727]
Deep learning is being introduced in the domain of logic locking.
We present SnapShot: a novel attack on logic locking that is the first of its kind to utilize artificial neural networks.
We show that SnapShot achieves an average key prediction accuracy of 82.60% for the selected attack scenario.
arXiv Detail & Related papers (2020-11-20T13:03:19Z) - Targeted Attack for Deep Hashing based Retrieval [57.582221494035856]
We propose a novel method, dubbed deep hashing targeted attack (DHTA), to study the targeted attack on such retrieval.
We first formulate the targeted attack as a point-to-set optimization, which minimizes the average distance between the hash code of an adversarial example and those of a set of objects with the target label.
To balance performance and perceptibility, we propose to minimize the Hamming distance between the hash code of the adversarial example and the anchor code under an $\ell_\infty$ restriction on the perturbation (see the anchor-code sketch after this list).
arXiv Detail & Related papers (2020-04-15T08:36:58Z) - Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
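Returning to the DHTA entry above: the anchor code that minimizes the average Hamming distance to a set of hash codes can be computed by a component-wise majority vote. The sketch below is a minimal illustration of that step under the assumption of $\pm 1$ codes; the function name `anchor_code` and the toy data are hypothetical.

```python
# Minimal sketch of the "anchor code" idea: given hash codes (in {-1, +1}^K)
# of objects carrying the target label, the component-wise majority vote
# minimizes the average Hamming distance to the set.
import numpy as np

def anchor_code(target_codes: np.ndarray) -> np.ndarray:
    """target_codes: shape (n, K) array of +/-1 hash codes.
    Returns the K-bit anchor code: the sign of each column sum."""
    s = target_codes.sum(axis=0)
    # Break ties toward +1; any fixed tie-breaking rule preserves optimality.
    return np.where(s >= 0, 1, -1)

# Toy usage: three 8-bit codes of objects with the target label.
codes = np.array([
    [ 1, -1,  1,  1, -1,  1, -1,  1],
    [ 1,  1,  1, -1, -1,  1, -1, -1],
    [-1, -1,  1,  1, -1,  1,  1,  1],
])
print("anchor code:", anchor_code(codes))
# The attack would then perturb the input, within the l_inf budget, so that
# the adversarial example's hash code moves toward this anchor code.
```

Component-wise voting is optimal here because Hamming distance decomposes across bit positions, so each bit of the anchor code can be chosen independently to agree with the majority of the target codes.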