Lightweight CNN Model Hashing with Higher-Order Statistics and Chaotic Mapping for Piracy Detection and Tamper Localization
- URL: http://arxiv.org/abs/2510.27127v1
- Date: Fri, 31 Oct 2025 03:04:10 GMT
- Title: Lightweight CNN Model Hashing with Higher-Order Statistics and Chaotic Mapping for Piracy Detection and Tamper Localization
- Authors: Kunming Yang, Ling Chen
- Abstract summary: Perceptual hashing has emerged as an effective approach for identifying pirated models. We propose a lightweight CNN model hashing technique that integrates higher-order statistics (HOS) features with a chaotic mapping mechanism.
- Score: 9.859893936091813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the widespread adoption of deep neural networks (DNNs), protecting intellectual property and detecting unauthorized tampering of models have become pressing challenges. Recently, perceptual hashing has emerged as an effective approach for identifying pirated models. However, existing methods either rely on neural networks for feature extraction, demanding substantial training resources, or suffer from limited applicability and cannot be universally applied to all convolutional neural networks (CNNs). To address these limitations, we propose a lightweight CNN model hashing technique that integrates higher-order statistics (HOS) features with a chaotic mapping mechanism. Without requiring any auxiliary neural network training, our method enables efficient piracy detection and precise tampering localization. Specifically, we extract skewness, kurtosis, and structural features from the parameters of each network layer to construct a model hash that is both robust and discriminative. Additionally, we introduce chaotic mapping to amplify minor changes in model parameters by exploiting the sensitivity of chaotic systems to initial conditions, thereby facilitating accurate localization of tampered regions. Experimental results validate the effectiveness and practical value of the proposed method for model copyright protection and integrity verification.
Related papers
- Protecting Deep Neural Network Intellectual Property with Chaos-Based White-Box Watermarking [2.667401221288548]
The rapid proliferation of deep neural networks (DNNs) has led to increasing concerns regarding intellectual property (IP) protection and model misuse. We propose an efficient and resilient white-box watermarking framework that embeds ownership information into the internal parameters of a DNN. The proposed method offers a flexible and scalable solution for embedding and verifying model ownership in white-box settings.
arXiv Detail & Related papers (2025-12-18T15:26:50Z)
- Certified Neural Approximations of Nonlinear Dynamics [51.01318247729693]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z)
- Efficient IoT Intrusion Detection with an Improved Attention-Based CNN-BiLSTM Architecture [0.2356141385409842]
This paper presents a compact and efficient approach to detect botnet attacks by employing an integrated approach. The proposed attention-based model achieves 99% classification accuracy in detecting botnet attacks utilizing the N-BaIoT dataset.
arXiv Detail & Related papers (2025-03-25T04:12:14Z)
- Efficient Reachability Analysis for Convolutional Neural Networks Using Hybrid Zonotopes [4.32258850473064]
Existing set propagation-based reachability analysis methods for feedforward neural networks often struggle to achieve both scalability and accuracy. This work presents a novel set-based approach for computing the reachable sets of convolutional neural networks.
arXiv Detail & Related papers (2025-03-13T19:45:26Z)
- Evaluating Single Event Upsets in Deep Neural Networks for Semantic Segmentation: an embedded system perspective [1.474723404975345]
This paper delves into the robustness assessment in embedded Deep Neural Networks (DNNs). By scrutinizing the layer-by-layer and bit-by-bit sensitivity of various encoder-decoder models to soft errors, this study thoroughly investigates the vulnerability of segmentation DNNs to SEUs. We propose a set of practical lightweight error mitigation techniques with no memory or computational cost suitable for resource-constrained deployments.
arXiv Detail & Related papers (2024-12-04T18:28:38Z)
- Reversible Quantization Index Modulation for Static Deep Neural Network Watermarking [57.96787187733302]
Reversible data hiding (RDH) methods offer a potential solution, but existing approaches suffer from weaknesses in terms of usability, capacity, and fidelity.
We propose a novel RDH-based static DNN watermarking scheme using quantization index modulation (QIM).
Our scheme incorporates a novel approach based on a one-dimensional quantizer for watermark embedding.
arXiv Detail & Related papers (2023-05-29T04:39:17Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Generating Probabilistic Safety Guarantees for Neural Network Controllers [30.34898838361206]
We use a dynamics model to determine the output properties that must hold for a neural network controller to operate safely.
We develop an adaptive verification approach to efficiently generate an overapproximation of the neural network policy.
We show that our method is able to generate meaningful probabilistic safety guarantees for aircraft collision avoidance neural networks.
arXiv Detail & Related papers (2021-03-01T18:48:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.