Going Deep: Using deep learning techniques with simplified mathematical
models against XOR BR and TBR PUFs (Attacks and Countermeasures)
- URL: http://arxiv.org/abs/2009.04063v1
- Date: Wed, 9 Sep 2020 01:41:57 GMT
- Title: Going Deep: Using deep learning techniques with simplified mathematical
models against XOR BR and TBR PUFs (Attacks and Countermeasures)
- Authors: Mahmoud Khalafalla, Mahmoud A. Elmohr, Catherine Gebotys
- Abstract summary: This paper contributes to the study of PUFs vulnerability against modeling attacks using a simplified mathematical model and deep learning (DL) techniques.
DL modeling attacks could easily break the security of 4-input XOR BR PUFs and 4-input XOR TBR PUFs with modeling accuracy $\sim$ 99%.
A new obfuscated architecture is introduced as a step to counter DL modeling attacks and it showed significant resistance against such attacks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper contributes to the study of PUFs vulnerability against modeling
attacks by evaluating the security of XOR BR PUFs, XOR TBR PUFs, and obfuscated
architectures of XOR BR PUF using a simplified mathematical model and deep
learning (DL) techniques. Obtained results show that DL modeling attacks could
easily break the security of 4-input XOR BR PUFs and 4-input XOR TBR PUFs with
modeling accuracy $\sim$ 99%. Similar attacks were executed using single-layer
neural networks (NN) and support vector machines (SVM) with polynomial kernel
and the obtained results showed that single NNs failed to break the PUF
security. Furthermore, SVM results confirmed the same modeling accuracy
reported in previous research ($\sim$ 50%). For the first time, this research
empirically shows that DL networks can be used as powerful modeling techniques
against these complex PUF architectures for which previous conventional machine
learning techniques had failed. Furthermore, a detailed scalability analysis is
conducted on the DL networks with respect to PUFs' stage size and complexity.
The analysis shows that the number of layers and hidden neurons inside every
layer has a linear relationship with PUFs' stage size, which agrees with the
theoretical findings in deep learning. Consequently, a new obfuscated
architecture is introduced as a first step to counter DL modeling attacks, and
it showed significant resistance against such attacks (16%-40% lower modeling
accuracy). This research provides an important step towards prioritizing the
efforts to introduce new PUF architectures that are more secure and
invulnerable to modeling attacks. Moreover, it triggers future discussions on
the removal of influential bits and the level of obfuscation needed to confirm
that a specific PUF architecture is resistant against powerful DL modeling
attacks.
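To make the comparison in the abstract more concrete, below is a minimal, hypothetical sketch in Python. It does not reproduce the paper's simplified BR/TBR mathematical model; instead it simulates a generic 4-XOR delay-style PUF using the standard parity-feature transform as a stand-in, and fits a multi-layer NN, a single-layer NN (perceptron), and a polynomial-kernel SVM on the simulated challenge-response pairs. All sizes (stage count, CRP count, layer widths) are illustrative assumptions, and the printed accuracies are not the paper's results.

import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_stages, n_xor, n_crps = 64, 4, 20_000   # illustrative sizes, not the paper's

def parity_features(challenges):
    # Standard parity transform used for delay-based PUF models; the paper's
    # simplified BR/TBR model is different and is not reproduced here.
    phi = 1 - 2 * challenges                          # map {0,1} -> {+1,-1}
    return np.cumprod(phi[:, ::-1], axis=1)[:, ::-1]

challenges = rng.integers(0, 2, size=(n_crps, n_stages))
phi = parity_features(challenges)
weights = rng.normal(size=(n_xor, n_stages))          # one linear model per XOR arm
responses = (np.prod(np.sign(phi @ weights.T), axis=1) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(phi, responses,
                                          test_size=0.2, random_state=0)

models = {
    "multi-layer NN":          MLPClassifier(hidden_layer_sizes=(64, 64, 64),
                                              max_iter=500, random_state=0),
    "single-layer NN":         Perceptron(max_iter=1000, random_state=0),
    "SVM (polynomial kernel)": SVC(kernel="poly", degree=3),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")

Reflecting the paper's scalability observation, one would expect to grow the number of hidden layers and neurons per layer roughly linearly with n_stages when attacking larger PUF instances.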
Related papers
- Designing a Photonic Physically Unclonable Function Having Resilience to Machine Learning Attacks [2.369276238599885]
We describe a computational PUF model for producing datasets required for training machine learning (ML) attacks.
We find that the modeled PUF generates distributions that resemble uniform white noise.
Preliminary analysis suggests that the PUF exhibits similar resilience to generative adversarial networks.
arXiv Detail & Related papers (2024-04-03T03:58:21Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- Attacking Delay-based PUFs with Minimal Adversary Model [13.714598539443513]
Physically Unclonable Functions (PUFs) provide a streamlined solution for lightweight device authentication.
Delay-based Arbiter PUFs, with their ease of implementation and vast challenge space, have received significant attention.
Research is polarized between developing modelling-resistant PUFs and devising machine learning attacks against them.
arXiv Detail & Related papers (2024-03-01T11:35:39Z)
- Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL [57.745700271150454]
We study the sample complexity of reinforcement learning in Mean-Field Games (MFGs) with model-based function approximation.
We introduce the Partial Model-Based Eluder Dimension (P-MBED), a more effective notion to characterize the model class complexity.
arXiv Detail & Related papers (2024-02-08T14:54:47Z)
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, relying on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- PUF-Phenotype: A Robust and Noise-Resilient Approach to Aid Intra-Group-based Authentication with DRAM-PUFs Using Machine Learning [10.445311342905118]
We propose a classification system using Machine Learning (ML) to accurately identify the origin of noisy memory-derived (DRAM) PUF responses.
We achieve up to 98% classification accuracy using a modified deep convolutional neural network (CNN) for feature extraction.
arXiv Detail & Related papers (2022-07-11T08:13:08Z)
- A New Security Boundary of Component Differentially Challenged XOR PUFs Against Machine Learning Modeling Attacks [0.0]
The XOR Arbiter PUF (XOR PUF or XPUF) is an intensively studied PUF invented to improve the security of the Arbiter PUF.
Recently, highly powerful machine learning attack methods were discovered and were able to easily break large-sized XPUFs.
In this paper, the two currently most powerful machine learning methods for attacking XPUFs are adapted by fine-tuning their parameters for CDC-XPUFs.
arXiv Detail & Related papers (2022-06-02T21:51:39Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on a modular re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- RAB: Provable Robustness Against Backdoor Attacks [20.702977915926787]
We focus on certifying the machine learning model robustness against general threat models, especially backdoor attacks.
We propose the first robust training process, RAB, to smooth the trained model and certify its robustness against backdoor attacks.
We conduct comprehensive experiments for different machine learning (ML) models and provide the first benchmark for certified robustness against backdoor attacks.
arXiv Detail & Related papers (2020-03-19T17:05:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.