Designing with Deception: ML- and Covert Gate-Enhanced Camouflaging to Thwart IC Reverse Engineering
- URL: http://arxiv.org/abs/2508.08462v1
- Date: Mon, 11 Aug 2025 20:40:42 GMT
- Title: Designing with Deception: ML- and Covert Gate-Enhanced Camouflaging to Thwart IC Reverse Engineering
- Authors: Junling Fan, David Koblah, Domenic Forte
- Abstract summary: Integrated circuits (ICs) are essential to modern electronic systems, yet they face significant risks from physical reverse engineering (RE) attacks. We present a machine learning-driven methodology that integrates cryptic and mimetic cyber deception principles to enhance IC security against RE. Our work sets a new standard for IC camouflage, advancing the application of cyber deception principles to protect critical systems from adversarial threats.
- Score: 2.6217304977339473
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Integrated circuits (ICs) are essential to modern electronic systems, yet they face significant risks from physical reverse engineering (RE) attacks that compromise intellectual property (IP) and overall system security. While IC camouflage techniques have emerged to mitigate these risks, existing approaches largely focus on localized gate modifications, neglecting comprehensive deception strategies. To address this gap, we present a machine learning (ML)-driven methodology that integrates cryptic and mimetic cyber deception principles to enhance IC security against RE. Our approach leverages a novel And-Inverter Graph Variational Autoencoder (AIG-VAE) to encode circuit representations, enabling dual-layered camouflage through functional preservation and appearance mimicry. By introducing new variants of covert gates -- Fake Inverters, Fake Buffers, and Universal Transmitters -- our methodology achieves robust protection by obscuring circuit functionality while presenting misleading appearances. Experimental results demonstrate the effectiveness of our strategy in maintaining circuit functionality while achieving high camouflage and similarity scores with minimal structural overhead. Additionally, we validate the robustness of our method against advanced artificial intelligence (AI)-enhanced RE attacks, highlighting its practical applicability in securing IC designs. By bridging the gap in mimetic deception for hardware security, our work sets a new standard for IC camouflage, advancing the application of cyber deception principles to protect critical systems from adversarial threats.
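The abstract's core mechanism, representing circuits as And-Inverter Graphs (AIGs) and inserting covert gates that look like one cell type but behave as another, can be illustrated with a minimal sketch. This is not the authors' code: `AigNode`, `eval_aig`, and `add_fake_inverter` are hypothetical names, and the paper's AIG-VAE is not reproduced; the sketch only shows how a "Fake Inverter" can present an inverter's appearance while preserving the circuit's function.

```python
from dataclasses import dataclass

# Illustrative sketch only: the names below are assumptions, not the paper's
# implementation. An AIG expresses any logic network as 2-input AND nodes
# with optional inversions on their fan-ins.

@dataclass
class AigNode:
    """One And-Inverter Graph node: AND of two fan-ins, each optionally inverted."""
    left: int                # value-index of left fan-in (primary input or earlier node)
    right: int               # value-index of right fan-in
    left_inv: bool = False   # complement the left fan-in?
    right_inv: bool = False  # complement the right fan-in?

def eval_aig(nodes, inputs):
    """Evaluate an AIG. Primary inputs occupy value-indices 0..len(inputs)-1;
    node i occupies value-index len(inputs)+i. Returns the last node's value."""
    vals = list(inputs)
    for n in nodes:
        a = vals[n.left] ^ n.left_inv
        b = vals[n.right] ^ n.right_inv
        vals.append(a and b)
    return vals[-1]

def add_fake_inverter(nodes, target):
    """Covert-gate sketch: a 'Fake Inverter' is laid out to look like an inverter
    but electrically passes its input through. Modeled here as AND(x, x) with no
    inversion, so the camouflaged netlist computes the same function as before."""
    nodes.append(AigNode(left=target, right=target))
    return nodes
```

For example, camouflaging the output of a 2-input AND (inputs at value-indices 0 and 1, AND node at index 2) leaves the truth table unchanged while adding an inverter-shaped cell for the reverse engineer to misread.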
Related papers
- Secure Semantic Communications via AI Defenses: Fundamentals, Solutions, and Future Directions [44.71660423560587]
This survey provides a defense-centered and system-oriented synthesis of security in SemCom via AI defense. We present a structured taxonomy of defense strategies organized by where semantic integrity can be compromised in SemCom systems. We also examine security-utility operating envelopes that capture tradeoffs among semantic fidelity, robustness, latency, and energy.
arXiv Detail & Related papers (2026-02-25T17:28:07Z) - NuRedact: Non-Uniform eFPGA Architecture for Low-Overhead and Secure IP Redaction [0.0]
This paper introduces NuRedact, the first full-custom eFPGA redaction framework that embraces architectural non-uniformity to balance security and efficiency. From a security perspective, NuRedact fabrics are evaluated against state-of-the-art attack models, including SAT-based, cyclic, and sequential variants, and show enhanced resilience while maintaining practical design overheads.
arXiv Detail & Related papers (2026-01-16T20:55:30Z) - Logic Encryption: This Time for Real [7.880593659618423]
We present a novel approach for IP protection based on logic encryption (LE). Unlike established schemes for logic locking, our work obfuscates the circuit's structure and functionality by encoding and encrypting the logic itself.
arXiv Detail & Related papers (2025-11-30T11:04:01Z) - Secure Tug-of-War (SecTOW): Iterative Defense-Attack Training with Reinforcement Learning for Multimodal Model Security [63.41350337821108]
We propose Secure Tug-of-War (SecTOW) to enhance the security of multimodal large language models (MLLMs). SecTOW consists of two modules, a defender and an auxiliary attacker, both trained iteratively using reinforcement learning (GRPO). We show that SecTOW significantly improves security while preserving general performance.
arXiv Detail & Related papers (2025-07-29T17:39:48Z) - A Survey on Autonomy-Induced Security Risks in Large Model-Based Agents [45.53643260046778]
Recent advances in large language models (LLMs) have catalyzed the rise of autonomous AI agents. These large-model agents mark a paradigm shift from static inference systems to interactive, memory-augmented entities.
arXiv Detail & Related papers (2025-06-30T13:34:34Z) - Vision Transformer with Adversarial Indicator Token against Adversarial Attacks in Radio Signal Classifications [33.246218531386326]
We propose a novel vision transformer (ViT) architecture by introducing a new concept known as the adversarial indicator (AdvI) token to detect adversarial attacks. We show the proposed AdvI token acts as a crucial element within the ViT, influencing attention weights and thereby highlighting regions or features in the input data that are potentially suspicious or anomalous.
arXiv Detail & Related papers (2025-06-13T15:21:54Z) - PICO: Secure Transformers via Robust Prompt Isolation and Cybersecurity Oversight [0.0]
We propose a robust transformer architecture designed to prevent prompt injection attacks. Our PICO framework structurally separates trusted system instructions from untrusted user inputs. We incorporate a specialized Security Expert Agent within a Mixture-of-Experts framework.
arXiv Detail & Related papers (2025-04-26T00:46:13Z) - An Approach to Technical AGI Safety and Security [72.83728459135101]
We develop an approach to address the risk of harms consequential enough to significantly harm humanity. We focus on technical approaches to misuse and misalignment. We briefly outline how these ingredients could be combined to produce safety cases for AGI systems.
arXiv Detail & Related papers (2025-04-02T15:59:31Z) - REFINE: Inversion-Free Backdoor Defense via Model Reprogramming [60.554146386198376]
Backdoor attacks on deep neural networks (DNNs) have emerged as a significant security threat. We propose REFINE, an inversion-free backdoor defense method based on model reprogramming.
arXiv Detail & Related papers (2025-02-22T07:29:12Z) - FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state-of-the-art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z) - SCARF: Securing Chips with a Robust Framework against Fabrication-time Hardware Trojans [1.8980236415886387]
Hardware Trojans (HTs) can be introduced during IC fabrication.
We propose a comprehensive approach to enhance IC security from front-end to back-end stages of design.
arXiv Detail & Related papers (2024-02-19T14:18:08Z) - Designing Secure Interconnects for Modern Microelectronics: From SoCs to Emerging Chiplet-Based Architectures [0.0]
Research focuses on securing Network-on-Chip (NoC) interconnects in System-on-Chip (SoC) architectures. The research builds on two methodologies: ObNoCs and POTENT. New challenges, such as safeguarding inter-chiplet communication and interposer design, are addressed through enhanced obfuscation, authentication, and encryption mechanisms.
arXiv Detail & Related papers (2023-07-11T21:49:45Z) - Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications to the system, and inspect the difference in agent proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.