A New Security Boundary of Component Differentially Challenged XOR PUFs
Against Machine Learning Modeling Attacks
- URL: http://arxiv.org/abs/2206.01314v1
- Date: Thu, 2 Jun 2022 21:51:39 GMT
- Title: A New Security Boundary of Component Differentially Challenged XOR PUFs
Against Machine Learning Modeling Attacks
- Authors: Gaoxiang Li, Khalid T. Mursi, Ahmad O. Aseeri, Mohammed S. Alkatheiri
and Yu Zhuang
- Abstract summary: The XOR Arbiter PUF (XOR PUF or XPUF) is an intensively studied PUF invented to improve the security of the Arbiter PUF.
Recently, highly powerful machine learning attack methods were discovered and were able to easily break large-sized XPUFs.
In this paper, the two most powerful current machine learning methods for attacking XPUFs are adapted to CDC-XPUFs by fine-tuning their parameters.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physical Unclonable Functions (PUFs) are promising security primitives for
resource-constrained network nodes. The XOR Arbiter PUF (XOR PUF or XPUF) is an
intensively studied PUF invented to improve the security of the Arbiter PUF,
probably the most lightweight delay-based PUF. Recently, highly powerful
machine learning attack methods were discovered and were able to easily break
large-sized XPUFs, which were highly secure against earlier machine learning
attack methods. Component-differentially-challenged XPUFs (CDC-XPUFs) are XPUFs
with different component PUFs receiving different challenges. Studies showed
they were much more secure against machine learning attacks than the
conventional XPUFs, whose component PUFs receive the same challenge. But these
studies were all based on earlier machine learning attack methods, and hence it
is not clear if CDC-XPUFs can remain secure under the recently discovered
powerful attack methods. In this paper, the two most powerful current machine
learning methods for attacking XPUFs are adapted to CDC-XPUFs by fine-tuning
their parameters. Attack experiments using both simulated PUF data and silicon
data generated from PUFs implemented on field-programmable gate arrays (FPGAs)
were carried out, and the experimental
results showed that some previously secure CDC-XPUFs of certain circuit
parameter values are no longer secure under the adapted new attack methods,
while many more CDC-XPUFs of other circuit parameter values remain secure.
Thus, our experimental attack study has re-defined the boundary between the
secure region and the insecure region of the PUF circuit parameter space,
providing PUF manufacturers and IoT security application developers with
valuable information in choosing PUFs with secure parameter values.
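To make the circuits under attack concrete, below is a minimal Python sketch of the standard additive delay model, contrasting a conventional XPUF (all k component Arbiter PUFs receive the same challenge) with a CDC-XPUF (each component receives its own challenge). The stage count, XOR size, and Gaussian delay weights are illustrative assumptions, not the paper's experimental configuration.

```python
# Minimal sketch of the additive delay model (assumed parameters).
# A conventional XPUF XORs k Arbiter-PUF bits derived from the SAME
# challenge; a CDC-XPUF gives each component its OWN challenge.
import numpy as np

rng = np.random.default_rng(0)

def parity_features(challenges):
    """Map 0/1 challenges of shape (N, n) to (N, n+1) parity vectors:
    phi_i = prod_{j>=i} (1 - 2*c_j), plus a constant bias feature."""
    c = 1 - 2 * challenges                         # {0,1} -> {+1,-1}
    phi = np.cumprod(c[:, ::-1], axis=1)[:, ::-1]  # suffix products
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

class SimXPUF:
    def __init__(self, n_stages, k, cdc=False):
        self.w = rng.standard_normal((k, n_stages + 1))  # component delays
        self.k, self.cdc = k, cdc

    def respond(self, challenges):
        if self.cdc:   # challenges: (N, k, n), one per component
            bits = [parity_features(challenges[:, i]) @ self.w[i] > 0
                    for i in range(self.k)]
        else:          # challenges: (N, n), broadcast to all components
            phi = parity_features(challenges)
            bits = [phi @ self.w[i] > 0 for i in range(self.k)]
        return np.bitwise_xor.reduce(np.array(bits), axis=0).astype(int)
```

A modeling attack in the spirit of the paper's experiments can then be sketched with an off-the-shelf classifier; the paper's actual attack methods and hyperparameters are not reproduced here.

```python
from sklearn.neural_network import MLPClassifier

puf = SimXPUF(n_stages=64, k=4)            # assumed sizes
C = rng.integers(0, 2, size=(100_000, 64))
R = puf.respond(C)
X = parity_features(C)
attack = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50,
                       random_state=0)
attack.fit(X[:80_000], R[:80_000])
print("prediction accuracy:", attack.score(X[80_000:], R[80_000:]))
```

For the CDC variant, an attacker would instead concatenate the k per-component parity vectors into one input vector, which is part of why the enlarged challenge space makes CDC-XPUFs harder to model.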
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy preserving decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
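As a generic illustration of the Deep Leakage mechanism summarized above, here is a minimal gradient-matching sketch in the style of the original formulation (Zhu et al., 2019); it makes no claim about FEDLAD's actual interfaces, and the model and sizes are assumptions.

```python
# Gradient matching in the style of Deep Leakage from Gradients:
# optimize dummy data until its gradients match the victim's gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
x_true, y_true = torch.randn(1, 32), torch.tensor([1])  # private data

true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), model.parameters())

x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)   # soft dummy labels
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    logits = model(x_dummy)
    loss = -(y_dummy.softmax(-1) * logits.log_softmax(-1)).sum()
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
    diff.backward()
    return diff

for _ in range(100):
    opt.step(closure)    # x_dummy drifts toward the private x_true
```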
- Designing Short-Stage CDC-XPUFs: Balancing Reliability, Cost, and Security in IoT Devices [2.28438857884398]
Physically Unclonable Functions (PUFs) generate unique cryptographic keys from inherent hardware variations.
Traditional PUFs like Arbiter PUFs (APUFs) and XOR Arbiter PUFs (XOR-PUFs) are susceptible to machine learning (ML) and reliability-based attacks.
We propose an optimized CDC-XPUF design that incorporates a pre-selection strategy to enhance reliability and introduces a novel lightweight architecture.
arXiv Detail & Related papers (2024-09-26T14:50:20Z)
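The reliability pre-selection idea mentioned above can be illustrated generically under the delay model: responses whose delay difference is near zero are the ones noise flips most easily, so small-margin challenges can be filtered out. The threshold and noise scale below are assumed values for illustration, not the paper's actual procedure.

```python
# Illustrative challenge pre-selection for reliability: keep only
# challenges whose delay margin |w . phi| clears a threshold, since
# small-margin responses are the ones measurement noise flips.
import numpy as np

rng = np.random.default_rng(1)
n = 64
w = rng.standard_normal(n + 1)               # one component Arbiter PUF

def parity_features(ch):
    c = 1 - 2 * ch
    phi = np.cumprod(c[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((ch.shape[0], 1))])

C = rng.integers(0, 2, size=(10_000, n))
margin = parity_features(C) @ w

reliable = np.abs(margin) > 2.0              # assumed margin threshold
noisy = (margin + rng.standard_normal(margin.shape)) > 0
flips = (margin > 0) != noisy                # response bits flipped by noise
print("flip rate, all:", flips.mean(),
      "| pre-selected:", flips[reliable].mean())
```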
- Reflecting on the State of Rehearsal-free Continual Learning with Pretrained Models [63.11967672725459]
We show how, most often, P-RFCL techniques can be matched by a simple and lightweight PEFT baseline.
arXiv Detail & Related papers (2024-06-13T17:57:10Z)
- MoE-FFD: Mixture of Experts for Generalized and Parameter-Efficient Face Forgery Detection [54.545054873239295]
Deepfakes have recently raised significant trust issues and security concerns among the public.
ViT-based methods take advantage of the expressivity of transformers, achieving superior detection performance.
This work introduces Mixture-of-Experts modules for Face Forgery Detection (MoE-FFD), a generalized yet parameter-efficient ViT-based approach.
arXiv Detail & Related papers (2024-04-12T13:02:08Z)
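For readers unfamiliar with the term, below is a minimal gated Mixture-of-Experts layer showing only the generic mechanism; the dimensions are assumptions and this is not MoE-FFD's actual architecture.

```python
# Minimal Mixture-of-Experts layer: a softmax gate mixes the outputs
# of several small expert networks per input example.
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    def __init__(self, dim=64, n_experts=4):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(n_experts)])

    def forward(self, x):                        # x: (batch, dim)
        weights = self.gate(x).softmax(dim=-1)   # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)
        return (outs * weights.unsqueeze(1)).sum(dim=-1)

y = MoELayer()(torch.randn(8, 64))               # -> shape (8, 64)
```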
- Attacking Delay-based PUFs with Minimal Adversary Model [13.714598539443513]
Physically Unclonable Functions (PUFs) provide a streamlined solution for lightweight device authentication.
Delay-based Arbiter PUFs, with their ease of implementation and vast challenge space, have received significant attention.
Research is polarized between developing modelling-resistant PUFs and devising machine learning attacks against them.
arXiv Detail & Related papers (2024-03-01T11:35:39Z)
- Strong Baselines for Parameter Efficient Few-Shot Fine-tuning [50.83426196335385]
Few-shot classification (FSC) entails learning novel classes given only a few examples per class after a pre-training (or meta-training) phase.
Recent works have shown that simply fine-tuning a pre-trained Vision Transformer (ViT) on new test classes is a strong approach for FSC.
Fine-tuning ViTs, however, is expensive in time, compute and storage.
This has motivated the design of parameter efficient fine-tuning (PEFT) methods which fine-tune only a fraction of the Transformer's parameters.
arXiv Detail & Related papers (2023-04-04T16:14:39Z)
- Lightweight Strategy for XOR PUFs as Security Primitives for Resource-constrained IoT device [0.0]
XOR Arbiter PUF (XOR-PUF) is one of the most studied PUFs.
Recent attack studies reveal that even XOR-PUFs with large XOR sizes are still not safe against machine learning attacks.
We present a strategy that combines the choice of XOR Arbiter PUF (XOR-PUF) architecture parameters with the way XOR-PUFs are used.
arXiv Detail & Related papers (2022-10-04T17:12:36Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- PUF-Phenotype: A Robust and Noise-Resilient Approach to Aid Intra-Group-based Authentication with DRAM-PUFs Using Machine Learning [10.445311342905118]
We propose a classification system using Machine Learning (ML) to accurately identify the origin of noisy memory derived (DRAM) PUF responses.
We achieve up to 98% classification accuracy using a modified deep convolutional neural network (CNN) for feature extraction.
arXiv Detail & Related papers (2022-07-11T08:13:08Z)
- Going Deep: Using deep learning techniques with simplified mathematical models against XOR BR and TBR PUFs (Attacks and Countermeasures) [0.0]
This paper contributes to the study of PUF vulnerability to modeling attacks using a simplified mathematical model and deep learning (DL) techniques.
DL modeling attacks could easily break the security of 4-input XOR BR PUFs and 4-input XOR PUFs with modeling accuracy of $\sim$99%.
A new obfuscated architecture is introduced as a step to counter DL modeling attacks, and it showed significant resistance against such attacks.
arXiv Detail & Related papers (2020-09-09T01:41:57Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
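The "Full DOS" attack named above relies on a PE-format fact: the modern Windows loader reads only the "MZ" magic (bytes 0-1) and the e_lfanew pointer (offset 0x3C) from the 64-byte DOS header, leaving the bytes in between free to carry a payload. A minimal sketch with a placeholder payload follows; the actual attacks optimize these bytes adversarially rather than writing fixed values.

```python
# Sketch of a "Full DOS"-style manipulation: overwrite the DOS-header
# bytes the Windows loader ignores (offsets 2..0x3B) with a payload,
# preserving the MZ magic and the e_lfanew pointer at offset 0x3C.

def inject_full_dos(pe_bytes: bytes, payload: bytes) -> bytes:
    if pe_bytes[:2] != b"MZ":
        raise ValueError("not a PE file")
    editable = 0x3C - 2                      # 58 editable header bytes
    payload = payload[:editable].ljust(editable, b"\x00")
    return pe_bytes[:2] + payload + pe_bytes[0x3C:]

# hypothetical usage (file names are placeholders):
# data = open("app.exe", "rb").read()
# open("patched.exe", "wb").write(inject_full_dos(data, b"\x90" * 58))
```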