Scalable IP Mimicry: End-to-End Deceptive IP Blending to Overcome Rectification and Scale Limitations of IP Camouflage
- URL: http://arxiv.org/abs/2512.12061v1
- Date: Fri, 12 Dec 2025 22:04:09 GMT
- Title: Scalable IP Mimicry: End-to-End Deceptive IP Blending to Overcome Rectification and Scale Limitations of IP Camouflage
- Authors: Junling Fan, George Rushevich, Giorgio Rusconi, Mengdi Zhu, Reiner Dizon-Paradis, Domenic Forte
- Abstract summary: IP theft incurs estimated annual losses ranging from $225 billion to $600 billion. Many semiconductor designs remain vulnerable to reverse engineering (RE). IP Camouflage is a breakthrough that expands beyond the logic gate hiding of traditional camouflage through "mimetic deception".
- Score: 2.42908752270213
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semiconductor intellectual property (IP) theft incurs estimated annual losses ranging from $225 billion to $600 billion. Despite initiatives like the CHIPS Act, many semiconductor designs remain vulnerable to reverse engineering (RE). IP Camouflage is a recent breakthrough that expands beyond the logic-gate hiding of traditional camouflage through "mimetic deception," where an entire module masquerades as a different IP. However, it faces key limitations: it requires a high-overhead post-generation rectification step, it is not easily scalable, and it uses an AIG logic representation that is mismatched with standard RE analysis flows. This paper addresses these shortcomings by introducing two novel, end-to-end models. We propose a Graph-Matching algorithm to solve the representation problem and a DNAS-based NAND Array model to achieve scalability. To facilitate this, we also introduce a mimicry-aware partitioning method, enabling a divide-and-conquer approach for large-scale designs. Our results demonstrate that these models are resilient to SAT and GNN-RE attacks, providing efficient and scalable paths for end-to-end deceptive IP design.
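The abstract only names the Graph-Matching component without implementation details. As a loose illustration of the kind of structural check such a flow must perform (deciding whether a camouflaged netlist is indistinguishable from the IP it imitates), here is a Weisfeiler-Lehman-style signature comparison in Python; all identifiers, the hash truncation, and the 3-round depth are our own assumptions, not the paper's algorithm.

```python
import hashlib

def wl_signature(fanin, gate_type, rounds=3):
    """Weisfeiler-Lehman-style structural signature of a labeled netlist.
    `fanin` maps each node to its list of fan-in nodes; `gate_type`
    maps each node to its gate label (e.g. "NAND", "INV", "IN")."""
    cur = dict(gate_type)
    for _ in range(rounds):
        nxt = {}
        for n in fanin:
            # Combine a node's own label with the sorted labels of its fan-ins.
            sig = cur[n] + "|" + ",".join(sorted(cur[m] for m in fanin[n]))
            nxt[n] = hashlib.sha256(sig.encode()).hexdigest()[:16]
        cur = nxt
    return sorted(cur.values())

def looks_like(fanin_a, types_a, fanin_b, types_b):
    """Equal signatures are necessary (not sufficient) for two netlists
    to be structurally identical, so a camouflaged module must at least
    pass this cheap check to mimic the target IP."""
    return wl_signature(fanin_a, types_a) == wl_signature(fanin_b, types_b)

# A two-gate module and a node-renamed copy of it.
fa = {"x": [], "y": [], "g1": ["x", "y"], "g2": ["g1"]}
ta = {"x": "IN", "y": "IN", "g1": "NAND", "g2": "INV"}
fb = {"p": [], "q": [], "u": ["p", "q"], "v": ["u"]}
tb = {"p": "IN", "q": "IN", "u": "NAND", "v": "INV"}
print(looks_like(fa, ta, fb, tb))  # True: same structure, same labels
```

A full flow would follow this cheap filter with an exact graph matcher; the sketch only conveys the matching idea.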
Related papers
- TABES: Trajectory-Aware Backward-on-Entropy Steering for Masked Diffusion Models [35.327100592206115]
Backward-on-Entropy (BoE) Steering is a gradient-guided inference framework that approximates infinite-horizon context via a single backward pass. To ensure scalability, we introduce ActiveQueryAttention, a sparse adjoint primitive that exploits the structure of the masking objective to reduce backward-pass complexity.
arXiv Detail & Related papers (2026-01-30T19:10:32Z)
- Revisiting Weighted Strategy for Non-stationary Parametric Bandits and MDPs [56.246783503873225]
This paper revisits the weighted strategy for non-stationary parametric bandits. We propose a simpler weight-based algorithm that is as efficient as window/restart-based algorithms. Our framework can be used to improve regret bounds of other parametric bandits.
arXiv Detail & Related papers (2026-01-03T04:50:21Z)
- Automated discovery of finite volume schemes using Graph Neural Networks [2.867517731896504]
We establish that Graph Neural Networks (GNNs) can serve purposes beyond their traditional role. We show that a GNN trained on a dataset consisting solely of two-node graphs can extrapolate a first-order Finite Volume scheme. Using symbolic regression, we show that the network effectively rediscovers the exact analytical formulation of the standard first-order FV scheme.
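For context, the scheme the GNN rediscovers is easy to state explicitly. A minimal sketch of the standard first-order upwind finite-volume update for linear advection (periodic boundary assumed; this is textbook material, not code from the paper):

```python
def upwind_fv_step(u, a, dt, dx):
    """One step of the classical first-order upwind finite-volume scheme
    for linear advection u_t + a*u_x = 0 with a > 0 and periodic cells.
    Each numerical flux couples exactly two neighbouring cells, matching
    the 'two-node graph' structure the GNN in the paper trains on."""
    return [u[i] - a * dt / dx * (u[i] - u[i - 1]) for i in range(len(u))]

u = [0.0, 1.0, 0.0, 0.0]
u1 = upwind_fv_step(u, a=1.0, dt=0.1, dx=0.5)
print(u1)  # [0.0, 0.8, 0.2, 0.0]: mass moves one way, total is conserved
```

The scheme is conservative (the fluxes telescope under periodic indexing), which is why the cell averages always sum to the same value.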
arXiv Detail & Related papers (2025-08-26T14:08:46Z)
- IDEA: An Inverse Domain Expert Adaptation Based Active DNN IP Protection Method [8.717704777664604]
Illegitimate reproduction, distribution, and derivation of Deep Neural Network (DNN) models can inflict economic loss, reputation damage, and even privacy infringement. We propose IDEA, an Inverse Domain Expert Adaptation based proactive DNN IP protection method featuring active authorization and source traceability. We extensively evaluate IDEA on five datasets and four DNN models to demonstrate its effectiveness in authorization control, culprit tracing success rate, and resilience against various attacks.
arXiv Detail & Related papers (2024-09-29T09:34:33Z)
- Older and Wiser: The Marriage of Device Aging and Intellectual Property Protection of Deep Neural Networks [10.686965180113118]
Deep neural networks (DNNs) are often kept secret due to high training costs and privacy concerns.
We propose a novel hardware-software co-design approach for DNN intellectual property (IP) protection.
Hardware-wise, we employ random aging to produce authorized chips.
Software-wise, we propose a novel DOFT, which allows pre-trained DNNs to maintain their original accuracy on authorized chips.
arXiv Detail & Related papers (2024-06-21T04:49:17Z)
- Transaction Fraud Detection via an Adaptive Graph Neural Network [64.9428588496749]
We propose an Adaptive Sampling and Aggregation-based Graph Neural Network (ASA-GNN) that learns discriminative representations to improve the performance of transaction fraud detection.
A neighbor sampling strategy is performed to filter noisy nodes and supplement information for fraudulent nodes.
Experiments on three real financial datasets demonstrate that the proposed method ASA-GNN outperforms state-of-the-art ones.
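The summary does not specify ASA-GNN's sampling rule. As a toy stand-in for the noise-filtering step, here is a cosine-similarity cutoff over neighbour features; the criterion and the 0.5 threshold are purely our assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def filter_neighbors(center, neighbors, threshold=0.5):
    """Keep only neighbours whose features agree with the centre node --
    a crude proxy for filtering noisy nodes before aggregation."""
    return [v for v in neighbors if cosine(center, v) >= threshold]

kept = filter_neighbors([1.0, 0.0],
                        [[1.0, 0.1], [-1.0, 0.0], [0.9, 0.2]])
print(len(kept))  # 2: the opposing-direction neighbour is dropped
```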
arXiv Detail & Related papers (2023-07-11T07:48:39Z)
- Expressive Losses for Verified Robustness via Convex Combinations [67.54357965665676]
We study the relationship between the over-approximation coefficient and performance profiles across different expressive losses.
We show that, while expressivity is essential, better approximations of the worst-case loss are not necessarily linked to superior robustness-accuracy trade-offs.
arXiv Detail & Related papers (2023-05-23T12:20:29Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitudes.
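One plausible reading of the soft-shrinkage step, sketched in plain Python; the shrink rate and the magnitude-based selection rule are our assumptions, not the paper's exact procedure:

```python
def soft_shrink(weights, p, rate=0.1):
    """Shrink the p-fraction of smallest-magnitude weights toward zero
    by an amount proportional to their own magnitude, rather than
    hard-zeroing them as conventional pruning would."""
    k = int(len(weights) * p)
    # Indices ordered from smallest to largest absolute value.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    out = list(weights)
    for i in order[:k]:
        out[i] -= rate * out[i]   # sign-preserving multiplicative shrink
    return out

print(soft_shrink([1.0, -0.2, 3.0, 0.05], p=0.5))
# the two smallest-magnitude weights are shrunk by 10%, the rest untouched
```

Because shrunk weights stay nonzero, the sparse structure can still change at the next iteration, which is the point of the "soft" approach.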
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations [22.89321897726347]
We propose a novel and practical mechanism which enables the service provider to verify whether a suspect model is stolen from the victim model.
Our framework can detect model IP breaches with 99.99% confidence using only 20 fingerprints of the suspect model.
arXiv Detail & Related papers (2022-02-17T11:29:50Z)
- GNN4IP: Graph Neural Network for Hardware Intellectual Property Piracy Detection [4.575465912399431]
Globalization of the IC supply chain exposes IP providers to theft and illegal redistribution of IPs.
We propose a novel methodology, GNN4IP, to assess similarities between circuits and detect IP piracy.
GNN4IP detects IP piracy with 96% accuracy in our dataset and recognizes the original IP in its obfuscated version with 100% accuracy.
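GNN4IP learns its similarity function from circuit graphs. As a hand-crafted stand-in that conveys the idea of comparing circuit signatures, here is a gate-type histogram overlap; this is our construction, not the paper's learned embedding:

```python
from collections import Counter

def gate_histogram(gate_types):
    """Gate-type histogram of a netlist -- a crude hand-crafted circuit
    signature standing in for a learned graph-level embedding."""
    return Counter(gate_types)

def similarity(sig_a, sig_b):
    """Normalized multiset overlap in [0, 1]: intersection of counts
    divided by union of counts."""
    inter = sum((sig_a & sig_b).values())
    union = sum((sig_a | sig_b).values())
    return inter / union if union else 1.0

orig = gate_histogram(["NAND"] * 6 + ["INV"] * 2 + ["XOR"])
pirated = gate_histogram(["NAND"] * 6 + ["INV"] * 2 + ["XOR", "BUF"])
print(similarity(orig, pirated))  # 0.9: near-identical gate mix
```

A histogram is invariant to obfuscations that only rewire or rename, which hints at why structure-aware similarity can survive obfuscated copies.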
arXiv Detail & Related papers (2021-07-19T20:13:16Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest technique in integer programming, we equivalently reformulate this BIP problem as a continuous optimization problem.
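The summary does not give the exact reformulation. A generic relax-then-round sketch of turning a binary program into a continuous one, for illustration only (the optimizer, step count, and rounding are all our choices):

```python
def relax_and_round(objective, n, steps=200, lr=0.1, h=1e-4):
    """Continuous relaxation of min f(b), b in {0,1}^n: run projected
    coordinate descent over the box [0,1]^n using central finite
    differences for gradients, then round each coordinate."""
    b = [0.5] * n
    for _ in range(steps):
        for i in range(n):
            hi, lo = list(b), list(b)
            hi[i] += h
            lo[i] -= h
            g = (objective(hi) - objective(lo)) / (2 * h)
            # Gradient step, projected back into [0, 1].
            b[i] = min(1.0, max(0.0, b[i] - lr * g))
    return [round(bi) for bi in b]

# Toy objective whose binary minimizer is b = (1, 0, 1).
target = [1, 0, 1]
f = lambda b: sum((bi - ti) ** 2 for bi, ti in zip(b, target))
print(relax_and_round(f, 3))  # [1, 0, 1]
```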
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- Patch-wise++ Perturbation for Adversarial Targeted Attacks [132.58673733817838]
We propose a patch-wise iterative method (PIM) aimed at crafting adversarial examples with high transferability.
Specifically, we introduce an amplification factor to the step size in each iteration, and one pixel's overall gradient overflowing the ε-constraint is properly assigned to its surrounding regions.
Compared with the current state-of-the-art attack methods, we significantly improve the success rate by 35.9% for defense models and 32.7% for normally trained models.
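A minimal sketch of the amplified-step idea with an L∞ projection. The patch-wise redistribution of clipped overflow, which is the paper's actual contribution, is omitted, and all parameter values are illustrative:

```python
def amplified_step(x, grad, x0, alpha=0.01, beta=10.0, eps=0.05):
    """One iteration of an amplified sign-gradient attack step followed
    by projection into the L_inf eps-ball around the clean input x0.
    beta amplifies the base step size alpha; overflow past eps is simply
    clipped here rather than reassigned to neighbouring pixels."""
    sign = lambda g: (g > 0) - (g < 0)
    stepped = [xi + beta * alpha * sign(g) for xi, g in zip(x, grad)]
    return [min(max(s, x0i - eps), x0i + eps)
            for s, x0i in zip(stepped, x0)]

clean = [0.0, 0.0]
out = amplified_step(clean, grad=[2.0, -3.0], x0=clean)
print(out)  # [0.05, -0.05]: the amplified 0.1 step is clipped to eps
```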
arXiv Detail & Related papers (2020-12-31T08:40:42Z)
- Passport-aware Normalization for Deep Model Protection [122.61289882357022]
We propose a new passport-aware normalization formulation for deep learning models.
It only needs to add another passport-aware branch for IP protection.
It is demonstrated to be robust not only to common attack techniques like fine-tuning and model compression, but also to ambiguity attacks.
arXiv Detail & Related papers (2020-10-29T17:57:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.