Designing a Photonic Physically Unclonable Function Having Resilience to Machine Learning Attacks
- URL: http://arxiv.org/abs/2404.02440v1
- Date: Wed, 3 Apr 2024 03:58:21 GMT
- Title: Designing a Photonic Physically Unclonable Function Having Resilience to Machine Learning Attacks
- Authors: Elena R. Henderson, Jessie M. Henderson, Hiva Shahoei, William V. Oxford, Eric C. Larson, Duncan L. MacFarlane, Mitchell A. Thornton
- Abstract summary: We describe a computational PUF model for producing datasets required for training machine learning (ML) attacks.
We find that the modeled PUF generates distributions that resemble uniform white noise.
Preliminary analysis suggests that the PUF exhibits similar resilience to generative adversarial networks.
- Score: 2.369276238599885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physically unclonable functions (PUFs) are designed to act as device 'fingerprints.' Given an input challenge, the PUF circuit should produce an unpredictable response for use in situations such as root-of-trust applications and other hardware-level cybersecurity applications. PUFs are typically subcircuits present within integrated circuits (ICs), and while conventional IC PUFs are well-understood, several implementations have proven vulnerable to malicious exploits, including those perpetrated by machine learning (ML)-based attacks. Such attacks can be difficult to prevent because they are often designed to work even when relatively few challenge-response pairs are known in advance. Hence the need for both more resilient PUF designs and analysis of ML-attack susceptibility. Previous work has developed a PUF for photonic integrated circuits (PICs). A PIC PUF not only produces unpredictable responses given manufacturing-introduced tolerances, but is also less prone to electromagnetic radiation eavesdropping attacks than a purely electronic IC PUF. In this work, we analyze the resilience of the proposed photonic PUF when subjected to ML-based attacks. Specifically, we describe a computational PUF model for producing the large datasets required for training ML attacks; we analyze the quality of the model; and we discuss the modeled PUF's susceptibility to ML-based attacks. We find that the modeled PUF generates distributions that resemble uniform white noise, explaining the exhibited resilience to neural-network-based attacks designed to exploit latent relationships between challenges and responses. Preliminary analysis suggests that the PUF exhibits similar resilience to generative adversarial networks, and continued development will show whether more-sophisticated ML approaches better compromise the PUF and -- if so -- how design modifications might improve resilience.
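To make the attack setting concrete, the sketch below shows the general shape of such a modeling attack and of the white-noise check: collect challenge-response pairs (CRPs), verify that the response bits look balanced, and test whether a small neural network can predict response bits from challenges better than chance. The PUF here is a hash-based stand-in, not the authors' computational photonic model, and the dataset size, challenge width, network architecture, and library choices are illustrative assumptions only.
```python
# Illustrative sketch of an ML modeling attack on PUF challenge-response pairs.
# The "PUF" is a hash-based stand-in, NOT the paper's photonic PUF model.
import hashlib

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_CRPS, CHALLENGE_BITS = 5000, 32  # hypothetical dataset size and challenge width

def toy_puf_response(challenge: np.ndarray) -> int:
    """Stand-in PUF: one response bit derived from a hash of the challenge bits."""
    digest = hashlib.sha256(np.packbits(challenge).tobytes()).digest()
    return digest[0] & 1

challenges = rng.integers(0, 2, size=(N_CRPS, CHALLENGE_BITS), dtype=np.uint8)
responses = np.array([toy_puf_response(c) for c in challenges])

# 1) Do the responses resemble uniform white noise? For a single response bit,
#    the simplest check is the bit balance, which should sit near 0.5.
print(f"response bit balance: {responses.mean():.3f} (ideal: 0.500)")

# 2) Modeling attack: train a small neural network to predict the response bit
#    from the challenge, then compare held-out accuracy against chance (0.5).
X_train, X_test, y_train, y_test = train_test_split(
    challenges, responses, test_size=0.2, random_state=0)
attack = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
attack.fit(X_train, y_train)
print(f"held-out attack accuracy: {attack.score(X_test, y_test):.3f} (chance: ~0.500)")
```
Held-out accuracy that stays near 0.5 indicates the attack found no exploitable structure in the CRPs, which is the behavior the abstract reports for the modeled photonic PUF against neural-network-based attacks.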
Related papers
- PhenoAuth: A Novel PUF-Phenotype-based Authentication Protocol for IoT Devices [9.608432807038083]
This work proposes a full noise-tolerant authentication protocol based on the PUF Phenotype concept.
It demonstrates mutual authentication and forward secrecy in a setting suitable for device-to-device communication.
arXiv Detail & Related papers (2024-03-06T06:04:32Z)
- A Photonic Physically Unclonable Function's Resilience to Multiple-Valued Machine Learning Attacks [2.271444649286985]
Physically unclonable functions (PUFs) identify integrated circuits using nonlinearly-related challenge-response pairs (CRPs).
We find that approximately 1,000 CRPs are necessary to train models that predict response bits better than random chance.
Given the significant challenge of acquiring a vast number of CRPs from a photonic PUF, our results demonstrate photonic PUF resilience against such attacks. (A minimal sketch of this accuracy-versus-CRP-budget style of analysis appears after this list.)
arXiv Detail & Related papers (2024-03-02T19:44:19Z)
- Attacking Delay-based PUFs with Minimal Adversary Model [13.714598539443513]
Physically Unclonable Functions (PUFs) provide a streamlined solution for lightweight device authentication.
Delay-based Arbiter PUFs, with their ease of implementation and vast challenge space, have received significant attention.
Research is polarized between developing modelling-resistant PUFs and devising machine learning attacks against them.
arXiv Detail & Related papers (2024-03-01T11:35:39Z)
- CycPUF: Cyclic Physical Unclonable Function [1.104960878651584]
We introduce feedback signals into traditional delay-based PUF designs to give them a wider range of possible output behaviors.
Based on our analysis, cyclic PUFs produce responses that can be binary, steady-state, oscillating, or pseudo-random under fixed challenges.
The security gain of the proposed cyclic PUFs is also shown against state-of-the-art attacks.
arXiv Detail & Related papers (2024-02-12T22:04:04Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
arXiv Detail & Related papers (2023-05-27T19:57:27Z)
- PUF-Phenotype: A Robust and Noise-Resilient Approach to Aid Intra-Group-based Authentication with DRAM-PUFs Using Machine Learning [10.445311342905118]
We propose a classification system using Machine Learning (ML) to accurately identify the origin of noisy, memory-derived (DRAM) PUF responses.
We achieve up to 98% classification accuracy using a modified deep convolutional neural network (CNN) for feature extraction.
arXiv Detail & Related papers (2022-07-11T08:13:08Z)
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization [76.51980153902774]
Federated learning (FL) is vulnerable to external attacks on FL models during parameter transmission.
In this paper, we propose effective model poisoning (MP) algorithms to combat state-of-the-art defensive aggregation mechanisms.
Our experimental results demonstrate that the proposed covert model poisoning (CMP) algorithms are effective and substantially outperform existing attack mechanisms.
arXiv Detail & Related papers (2021-01-28T03:28:18Z)
- Estimating Structural Target Functions using Machine Learning and Influence Functions [103.47897241856603]
We propose a new framework for statistical machine learning of target functions arising as identifiable functionals from statistical models.
This framework is problem- and model-agnostic and can be used to estimate a broad variety of target parameters of interest in applied statistics.
We put particular focus on so-called coarsening at random/doubly robust problems with partially unobserved information.
arXiv Detail & Related papers (2020-08-14T16:48:29Z)
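As noted in the "Multiple-Valued Machine Learning Attacks" entry above, resilience in such studies is commonly reported as attack accuracy versus the number of CRPs available to the adversary. The sketch below shows the form of that measurement using a classical additive-delay (arbiter-style) PUF simulation as a convenient, learnable stand-in; it is not the photonic PUF from the paper, and the stage count, CRP budgets, and use of logistic regression are illustrative assumptions.
```python
# Illustrative sweep of attack accuracy versus CRP budget against a simulated
# arbiter-style PUF (additive linear delay model). NOT the photonic PUF.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N_STAGES, N_TEST = 64, 2000  # illustrative sizes

def parity_features(challenges: np.ndarray) -> np.ndarray:
    """Standard arbiter-PUF feature map: cumulative parities of the challenge bits."""
    signed = 1 - 2 * challenges                        # {0,1} -> {+1,-1}
    return np.cumprod(signed[:, ::-1], axis=1)[:, ::-1]

stage_delays = rng.normal(size=N_STAGES)               # simulated stage delay differences

def simulated_response(challenges: np.ndarray) -> np.ndarray:
    """Response bit of the additive linear-delay (arbiter-style) simulation."""
    return (parity_features(challenges) @ stage_delays > 0).astype(int)

test_c = rng.integers(0, 2, size=(N_TEST, N_STAGES))
test_r = simulated_response(test_c)

for n_crps in (100, 300, 1_000, 3_000, 10_000):        # adversary's CRP budget
    train_c = rng.integers(0, 2, size=(n_crps, N_STAGES))
    attack = LogisticRegression(max_iter=2000)
    attack.fit(parity_features(train_c), simulated_response(train_c))
    acc = attack.score(parity_features(test_c), test_r)
    print(f"{n_crps:6d} training CRPs -> attack accuracy {acc:.3f} (chance ~0.5)")
```
For the delay-based simulation above, accuracy climbs quickly with more training CRPs; the cited study instead reports that roughly 1,000 CRPs are needed before models beat chance on the photonic PUF, and that collecting even that many from a photonic device is already difficult in practice. The sweep merely shows the form such a measurement takes.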
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.