EIM-TRNG: Obfuscating Deep Neural Network Weights with Encoding-in-Memory True Random Number Generator via RowHammer
- URL: http://arxiv.org/abs/2507.02206v1
- Date: Thu, 03 Jul 2025 00:01:33 GMT
- Title: EIM-TRNG: Obfuscating Deep Neural Network Weights with Encoding-in-Memory True Random Number Generator via RowHammer
- Authors: Ranyang Zhou, Abeer Matar A. Almalky, Gamana Aragonda, Sabbir Ahmed, Filip Roth Trønnes-Christensen, Adnan Siraj Rakin, Shaahin Angizi
- Abstract summary: True Random Number Generators (TRNGs) play a fundamental role in hardware security, cryptographic systems, and data protection. In this work, we propose a novel TRNG called EIM-TRNG that leverages the inherent physical randomness in DRAM cell behavior. We demonstrate how the unpredictable bit-flips generated through carefully controlled RowHammer operations can be harnessed as a reliable entropy source.
- Score: 8.544950313051402
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: True Random Number Generators (TRNGs) play a fundamental role in hardware security, cryptographic systems, and data protection. In the context of Deep Neural Networks (DNNs), safeguarding model parameters, particularly weights, is critical to ensure the integrity, privacy, and intellectual property of AI systems. While software-based pseudo-random number generators are widely used, they lack the unpredictability and resilience offered by hardware-based TRNGs. In this work, we propose, for the first time, a novel and robust Encoding-in-Memory TRNG called EIM-TRNG that leverages the inherent physical randomness in DRAM cell behavior, particularly under RowHammer-induced disturbances. We demonstrate how the unpredictable bit-flips generated through carefully controlled RowHammer operations can be harnessed as a reliable entropy source. Furthermore, we apply this TRNG framework to secure DNN weight data by encoding it via a combination of fixed and unpredictable bit-flips. The encrypted data is later decrypted using a key derived from the probabilistic flip behavior, ensuring both data confidentiality and model authenticity. Our results validate the effectiveness of DRAM-based entropy extraction for robust, low-cost hardware security and offer a promising direction for protecting machine learning models at the hardware level.
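The abstract sketches a pipeline in which RowHammer-induced bit-flips serve as an entropy source and DNN weights are encoded with a mix of fixed and unpredictable flips, then recovered with a key derived from the observed flip behavior. The Python snippet below is a minimal conceptual sketch of that flow, not the authors' EIM-TRNG implementation: the flips are simulated in software (real RowHammer entropy requires low-level control of DRAM row activations), SHA-256 is used as a stand-in entropy conditioner, XOR is used as a stand-in for the paper's encoding scheme, and the function names (`simulate_rowhammer_flips`, `derive_keystream`, `encode_weights`) are hypothetical.

```python
# Conceptual sketch only -- NOT the paper's EIM-TRNG implementation.
# Real RowHammer entropy requires low-level DRAM access (cache bypassing,
# repeated activations of aggressor rows); here the probabilistic
# bit-flips are simulated so the end-to-end flow stays runnable.
import hashlib
import secrets

import numpy as np


def simulate_rowhammer_flips(num_cells: int, flip_prob: float = 0.3, seed=None) -> np.ndarray:
    """Stand-in for hammering a victim row: each vulnerable cell flips
    with some probability that depends on its physical state."""
    rng = np.random.default_rng(seed)
    return (rng.random(num_cells) < flip_prob).astype(np.uint8)


def derive_keystream(flip_pattern: np.ndarray, length: int) -> np.ndarray:
    """Condition the raw flip pattern into a keystream by hashing it
    (a simple extractor stand-in; the paper's encoding scheme differs)."""
    seed_digest = hashlib.sha256(flip_pattern.tobytes()).digest()
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(seed_digest + counter.to_bytes(4, "little")).digest()
        counter += 1
    return np.frombuffer(out[:length], dtype=np.uint8)


def encode_weights(weights_bytes: np.ndarray, keystream: np.ndarray) -> np.ndarray:
    """Obfuscate quantized weight bytes by XOR-ing them with the keystream."""
    return np.bitwise_xor(weights_bytes, keystream)


# --- demo: 16 fake 8-bit quantized weights ---
weights = np.clip(np.random.default_rng(0).normal(size=16) * 32, -127, 127).astype(np.int8)
weights_bytes = weights.view(np.uint8)

flips = simulate_rowhammer_flips(num_cells=1024, seed=secrets.randbits(64))
key = derive_keystream(flips, length=weights_bytes.size)

cipher = encode_weights(weights_bytes, key)
restored = np.bitwise_xor(cipher, key).view(np.int8)

assert np.array_equal(restored, weights)
print("weights recovered only with the flip-derived key")
```

The demo only illustrates the dependency the abstract describes: without the key material derived from the flip pattern, the obfuscated weight bytes cannot be restored.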
Related papers
- Adaptive Variation-Resilient Random Number Generator for Embedded Encryption [7.239794172995]
We present an adaptive variation-resilient RNG capable of extracting unbiased encryption-grade random number streams from physically driven entropy sources.
The generated unbiased bit streams, due to their higher entropy, then only need to undergo simplified post-processing.
arXiv Detail & Related papers (2025-07-07T22:42:49Z)
- AI-Hybrid TRNG: Kernel-Based Deep Learning for Near-Uniform Entropy Harvesting from Physical Noise [0.0]
AI-Hybrid TRNG is a deep-learning framework that extracts near-uniform entropy directly from physical noise.
It relies on a low-cost, thumb-sized RF front end, plus CPU-timing jitter, for training, and then emits 32-bit high-entropy streams without any quantization step.
arXiv Detail & Related papers (2025-06-30T18:01:40Z)
- Continuous-Variable Source-Independent Quantum Random Number Generator with a Single Phase-Insensitive Detector [0.5439020425819]
Quantum random number generators (QRNGs) harness quantum mechanical unpredictability to produce true randomness.
We propose a novel CV-SI-QRNG scheme with a single phase-insensitive detector and provide a security proof based on semi-definite programming (SDP).
These results demonstrate the feasibility of our framework, paving the way for practical and simple SI-QRNG implementations.
arXiv Detail & Related papers (2024-11-22T09:26:53Z)
- Machine Learning needs Better Randomness Standards: Randomised Smoothing and PRNG-based attacks [14.496582479888765]
We consider whether attackers can compromise a machine learning system using only the randomness on which such systems commonly rely.
We demonstrate an entirely novel attack in which an attacker backdoors the supplied randomness to falsely certify either an overestimate or an underestimate of robustness for up to 81 times.
We advocate updating the NIST guidelines on random number testing to make them more appropriate for safety-critical and security-critical machine-learning applications.
arXiv Detail & Related papers (2023-06-24T19:50:08Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- On the Forward Invariance of Neural ODEs [92.07281135902922]
We propose a new method to ensure neural ordinary differential equations (ODEs) satisfy output specifications.
Our approach uses a class of control barrier functions to transform output specifications into constraints on the parameters and inputs of the learning system.
arXiv Detail & Related papers (2022-10-10T15:18:28Z)
- RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems [41.1371349978643]
We present an approach that targets the security of collaborative deep inference by rethinking the distribution strategy.
We formulate this methodology as an optimization in which we establish a trade-off between the latency of co-inference and the privacy level of the data.
arXiv Detail & Related papers (2022-08-27T14:50:00Z)
- THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption [112.02441503951297]
Privacy-preserving inference of transformer models is in demand among cloud service users.
We introduce THE-X, an approximation approach for transformers that enables privacy-preserving inference of pre-trained models.
arXiv Detail & Related papers (2022-06-01T03:49:18Z)
- Deep Transformer Networks for Time Series Classification: The NPP Safety Case [59.20947681019466]
An advanced temporal neural network referred to as the Transformer is used in a supervised learning fashion to model the time-dependent NPP simulation data.
The Transformer can learn the characteristics of the sequential data and yield promising performance, with approximately 99% classification accuracy on the testing dataset.
arXiv Detail & Related papers (2021-04-09T14:26:25Z)
- A High Speed Integrated Quantum Random Number Generator with on-Chip Real-Time Randomness Extraction [2.759846687681801]
We present the first integrated Quantum RNG (QRNG) in a standard CMOS technology node.
We show that co-integration of combinational logic, even of high complexity, does not affect the quality of randomness.
Our CMOS QRNG can reach up to 400 Mbit/s throughput with low power consumption.
arXiv Detail & Related papers (2021-02-11T19:55:29Z)
- Security Analysis and Improvement of Source Independent Quantum Random Number Generators with Imperfect Devices [21.524683492769526]
A quantum random number generator (QRNG) is essential in many applications, such as numerical simulation and cryptography.
Recently, a source-independent quantum random number generator (SI-QRNG), which can generate secure random numbers with untrusted sources, has been realized.
Here, we point out and evaluate the security loopholes of practical imperfect measurement devices in SI-QRNGs.
arXiv Detail & Related papers (2021-01-12T07:10:23Z)
- RAIN: A Simple Approach for Robust and Accurate Image Classification Networks [156.09526491791772]
It has been shown that the majority of existing adversarial defense methods achieve robustness at the cost of sacrificing prediction accuracy.
This paper proposes a novel preprocessing framework, which we term Robust and Accurate Image classificatioN (RAIN).
RAIN applies randomization over inputs to break the ties between the model's forward prediction path and the backward gradient path, thus improving model robustness.
We conduct extensive experiments on the STL10 and ImageNet datasets to verify the effectiveness of RAIN against various types of adversarial attacks.
arXiv Detail & Related papers (2020-04-24T02:03:56Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)