Breaking XOR Arbiter PUFs with Chosen Challenge Attack
- URL: http://arxiv.org/abs/2312.01256v2
- Date: Sat, 26 Apr 2025 23:06:58 GMT
- Title: Breaking XOR Arbiter PUFs with Chosen Challenge Attack
- Authors: Niloufar Sayadi, Phuong Ha Nguyen, Marten van Dijk, Chenglu Jin
- Abstract summary: The XOR Arbiter PUF was introduced as a strong PUF in 2007 and was broken in 2015 by a Machine Learning (ML) attack. We show that, for the first time, a perfectly reliable XOR Arbiter PUF can be successfully attacked in a divide-and-conquer manner. This allows us to attack large XOR Arbiter PUFs efficiently, even without reliability information or any side-channel information.
- Score: 13.358255319545789
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The XOR Arbiter PUF was introduced as a strong PUF in 2007 and was broken in 2015 by a Machine Learning (ML) attack, which allows the underlying Arbiter PUFs to be modeled individually by exploiting reliability information of the measured responses. To mitigate the reliability-based attacks, state-of-the-art understanding shows that the reliability of individual Arbiter PUFs and the overall XOR Arbiter PUF can be boosted to an arbitrarily high level, thus rendering all known reliability-based ML attacks infeasible; alternatively, an access control interface around the XOR Arbiter PUF can prevent the same challenge-response pairs from being accessed repeatedly, thus eliminating the leakage of reliability information. We show that, for the first time, a perfectly reliable XOR Arbiter PUF can be successfully attacked in a divide-and-conquer manner, meaning each underlying Arbiter PUF in an XOR Arbiter PUF can be attacked individually. This allows us to attack large XOR Arbiter PUFs efficiently, even without reliability information or any side-channel information. Our key insight is that, instead of reliability information, the responses of highly correlated challenges also reveal how close the responses are to the response decision boundary. This leads to a chosen challenge attack on XOR Arbiter PUFs by carefully choosing correlated challenges to measure and aggregate the collected information. We validate our attack by using PUF simulation, as well as an XOR Arbiter PUF implemented on FPGA. We also demonstrate that our chosen challenge methodology is compatible with the state-of-the-art combined gradient-based multi-objective optimization attack. Finally, we discuss an effective countermeasure that can prevent our attack but with a relatively large area overhead compared to the PUF itself.
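A minimal numerical sketch may help illustrate the key insight. Assuming the standard additive delay model, in which an Arbiter PUF answers a challenge c with r = sign(w · Φ(c)) for a parity feature vector Φ(c), the code below simulates one Arbiter PUF and shows that the responses of highly correlated challenges (here, single-bit-flip neighbours of a base challenge) flip more often when the base challenge lies close to the response decision boundary. The stage count, weight distribution, and choice of flipped bits are illustrative assumptions; this is not the authors' exact challenge-selection or aggregation procedure, only the correlation their chosen challenge attack builds on.

```python
# Sketch: correlated challenges reveal proximity to the decision boundary
# in a simulated Arbiter PUF (additive delay model, illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n_stages = 64
w = rng.normal(size=n_stages + 1)            # delay-difference weights plus bias term

def phi(c):
    """Parity feature vector of a single challenge c in {0,1}^n."""
    s = 1 - 2 * c                             # map challenge bits to +/-1
    f = np.cumprod(s[::-1])[::-1]             # phi_i = prod_{j >= i} (1 - 2 c_j)
    return np.append(f, 1.0)                  # constant feature for the bias weight

def response(c):
    return 1 if w @ phi(c) > 0 else -1

def neighbour_flips(c, k=16):
    """Count how many of k single-bit-flip neighbours change the response."""
    base = response(c)
    count = 0
    for j in range(k):                        # flip early challenge bits one at a time
        c2 = c.copy()
        c2[j] ^= 1
        count += response(c2) != base
    return count

challenges = rng.integers(0, 2, size=(2000, n_stages))
margins = np.array([abs(w @ phi(c)) for c in challenges])   # distance to decision boundary
flips = np.array([neighbour_flips(c) for c in challenges])

near = margins < np.quantile(margins, 0.2)    # challenges closest to the boundary
far = margins > np.quantile(margins, 0.8)     # challenges farthest from the boundary
print("mean neighbour flips near the boundary:", flips[near].mean())
print("mean neighbour flips far from the boundary:", flips[far].mean())
```

In the paper's attack, statistics of this kind, gathered from carefully chosen and aggregated challenges, replace the reliability information exploited by earlier attacks, which is what enables the divide-and-conquer modeling of each Arbiter PUF inside a perfectly reliable XOR Arbiter PUF.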
Related papers
- Designing a Photonic Physically Unclonable Function Having Resilience to Machine Learning Attacks [2.369276238599885]
We describe a computational PUF model for producing datasets required for training machine learning (ML) attacks.
We find that the modeled PUF generates distributions that resemble uniform white noise.
Preliminary analysis suggests that the PUF exhibits similar resilience to generative adversarial networks.
arXiv Detail & Related papers (2024-04-03T03:58:21Z)
- Attacking Delay-based PUFs with Minimal Adversary Model [13.714598539443513]
Physically Unclonable Functions (PUFs) provide a streamlined solution for lightweight device authentication.
Delay-based Arbiter PUFs, with their ease of implementation and vast challenge space, have received significant attention.
Research is polarized between developing modelling-resistant PUFs and devising machine learning attacks against them.
arXiv Detail & Related papers (2024-03-01T11:35:39Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT)
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Lightweight Strategy for XOR PUFs as Security Primitives for Resource-constrained IoT device [0.0]
The XOR Arbiter PUF (XOR-PUF) is one of the most studied PUFs.
Recent attack studies reveal that even XOR-PUFs with large XOR sizes are still not safe against machine learning attacks.
We present a strategy that combines the choice of XOR Arbiter PUF (XOR-PUF) architecture parameters with the way XOR-PUFs are used.
arXiv Detail & Related papers (2022-10-04T17:12:36Z)
- PUF-Phenotype: A Robust and Noise-Resilient Approach to Aid Intra-Group-based Authentication with DRAM-PUFs Using Machine Learning [10.445311342905118]
We propose a classification system using Machine Learning (ML) to accurately identify the origin of noisy memory derived (DRAM) PUF responses.
We achieve up to 98% classification accuracy using a modified deep convolutional neural network (CNN) for feature extraction.
arXiv Detail & Related papers (2022-07-11T08:13:08Z)
- A New Security Boundary of Component Differentially Challenged XOR PUFs Against Machine Learning Modeling Attacks [0.0]
The XOR Arbiter PUF (XOR PUF or XPUF) is an intensively studied PUF invented to improve the security of the Arbiter PUF.
Recently, highly powerful machine learning attack methods were discovered and were able to easily break large-sized XPUFs.
In this paper, the two currently most powerful machine learning methods for attacking XPUFs are adapted to CDC-XPUFs by fine-tuning their parameters.
arXiv Detail & Related papers (2022-06-02T21:51:39Z)
- Adversarial Attacks on ML Defense Models Competition [82.37504118766452]
The TSAIL group at Tsinghua University and the Alibaba Security group organized this competition.
The purpose of this competition is to motivate novel attack algorithms to evaluate adversarial robustness.
arXiv Detail & Related papers (2021-10-15T12:12:41Z)
- Quality of Service Guarantees for Physical Unclonable Functions [90.99207266853986]
Noisy physical unclonable function (PUF) outputs facilitate reliable, secure, and private key agreement.
We introduce a quality of service parameter to control the percentage of PUF outputs for which a target reliability level can be guaranteed.
arXiv Detail & Related papers (2021-07-12T18:26:08Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
arXiv Detail & Related papers (2021-06-15T16:50:54Z)
- Adversarial Training with Rectified Rejection [114.83821848791206]
We propose to use true confidence (T-Con) as a certainty oracle, and learn to predict T-Con by rectifying confidence.
We prove that under mild conditions, a rectified confidence (R-Con) rejector and a confidence rejector can be coupled to distinguish any wrongly classified input from correctly classified ones.
arXiv Detail & Related papers (2021-05-31T08:24:53Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features in federated learning but often overlooked in the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdooring attacks in federated learning through comprehensive experiments using synthetic and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
- Going Deep: Using deep learning techniques with simplified mathematical models against XOR BR and TBR PUFs (Attacks and Countermeasures) [0.0]
This paper contributes to the study of PUFs vulnerability against modeling attacks using a simplified mathematical model and deep learning (DL) techniques.
DL modeling attacks could easily break the security of 4-input XOR BR PUFs and 4-input XOR PUFs with a modeling accuracy of about 99%; a minimal sketch of this style of modeling attack follows this list.
A new obfuscated architecture is introduced as a step to counter DL modeling attacks and it showed significant resistance against such attacks.
arXiv Detail & Related papers (2020-09-09T01:41:57Z)
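Several entries above (the lightweight XOR-PUF strategy, the CDC-XPUF security boundary study, and the "Going Deep" paper) concern ML modeling attacks on XOR Arbiter PUFs. The sketch below shows the simplest form of such an attack under the additive delay model: simulate a small XOR Arbiter PUF, transform challenges into parity features, and fit an off-the-shelf classifier to the collected challenge-response pairs. The PUF size, CRP count, network shape, and the use of scikit-learn are illustrative assumptions; the papers above employ considerably more specialised attack methods.

```python
# Sketch: basic ML modeling attack on a simulated XOR Arbiter PUF
# (illustrative sizes; scikit-learn assumed available).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_stages, k_xor, n_crps = 64, 2, 60_000
W = rng.normal(size=(k_xor, n_stages + 1))    # one weight vector per arbiter chain

def phi(challenges):
    """Parity feature vectors for a batch of challenges in {0,1}^n."""
    s = 1 - 2 * challenges                    # map bits to +/-1
    f = np.cumprod(s[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([f, np.ones((challenges.shape[0], 1))])

challenges = rng.integers(0, 2, size=(n_crps, n_stages))
X = phi(challenges)
# XOR Arbiter PUF response: product of the signs of the individual chain outputs.
y = (np.prod(np.sign(X @ W.T), axis=1) > 0).astype(int)

split = (n_crps * 4) // 5                     # 80/20 train/test split
attack_model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
attack_model.fit(X[:split], y[:split])
print("modeling accuracy on held-out CRPs:", attack_model.score(X[split:], y[split:]))
```

As the number of XOR streams grows, the CRP count and training effort required by such generic attacks increase sharply, which is why the specialised reliability-based, chosen challenge, and gradient-based multi-objective attacks discussed above matter in practice.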
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.