A novel reliability attack of Physical Unclonable Functions
- URL: http://arxiv.org/abs/2405.13147v2
- Date: Fri, 7 Jun 2024 19:56:12 GMT
- Title: A novel reliability attack of Physical Unclonable Functions
- Authors: Gaoxiang Li, Yu Zhuang
- Abstract summary: Physical Unclonable Functions (PUFs) are emerging as promising security primitives for IoT devices.
Despite their strengths, PUFs are vulnerable to machine learning (ML) attacks, including conventional and reliability-based attacks.
- Score: 1.9336815376402723
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physical Unclonable Functions (PUFs) are emerging as promising security primitives for IoT devices, providing device fingerprints based on physical characteristics. Despite their strengths, PUFs are vulnerable to machine learning (ML) attacks, including conventional and reliability-based attacks. Conventional ML attacks have been effective in revealing vulnerabilities of many PUFs, and reliability-based ML attacks are more powerful tools that have detected vulnerabilities of some PUFs that resist conventional ML attacks. Since reliability-based ML attacks leverage information about PUFs' unreliability, we examined the feasibility of building a defense using reliability-enhancing techniques and discovered that majority voting with a reasonably high number of repeats provides an effective defense against existing reliability-based ML attack methods. Because majority voting reduces but does not eliminate unreliability, we were motivated to investigate whether new attack methods can capture the low residual unreliability of highly, but not perfectly, reliable PUFs. This investigation led to a new reliability representation and a representation-enabled attack method that has experimentally cracked PUFs enhanced with majority voting of high repetition counts.
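The interplay described in the abstract can be illustrated with a small simulation: majority voting over repeated measurements suppresses, but does not eliminate, response unreliability, and the surviving residual signal is what a refined reliability representation could try to exploit. The sketch below is a minimal, generic illustration, not the paper's code; the linear arbiter-PUF delay model, the noise level, the repeat counts, and the simple "bias" statistic are all illustrative assumptions, and the bias statistic is only a stand-in for the reliability representation the paper actually develops.

```python
# Minimal sketch (assumed parameters, not the paper's implementation):
# majority voting over repeated PUF measurements vs. residual unreliability.
import numpy as np

rng = np.random.default_rng(0)

N_STAGES = 64          # challenge length of the simulated arbiter PUF
NOISE_SIGMA = 0.05     # Gaussian measurement noise (illustrative value)
REPEATS = 11           # repeats combined in one majority vote (the defense)
VOTES = 200            # independent majority-voted evaluations per challenge

weights = rng.normal(0.0, 1.0, N_STAGES + 1)   # linear delay-difference model

def phi(challenge):
    """Standard arbiter-PUF parity feature vector for a 0/1 challenge."""
    c = 1 - 2 * challenge                       # map {0,1} -> {+1,-1}
    prods = np.cumprod(c[::-1])[::-1]           # suffix products
    return np.append(prods, 1.0)                # constant term for the bias weight

def noisy_response(challenge):
    """One raw (unreliable) evaluation of the simulated PUF."""
    delay = weights @ phi(challenge) + rng.normal(0.0, NOISE_SIGMA)
    return int(delay > 0)

def voted_response(challenge, repeats=REPEATS):
    """Majority vote over `repeats` raw evaluations."""
    return int(sum(noisy_response(challenge) for _ in range(repeats)) > repeats / 2)

# Pick a challenge that is highly but not perfectly reliable: its noiseless
# margin is close to one noise sigma, so a little unreliability survives voting.
candidates = rng.integers(0, 2, (5000, N_STAGES))
margins = np.array([abs(weights @ phi(c)) for c in candidates])
challenge = candidates[np.argmin(np.abs(margins - NOISE_SIGMA))]

raw = [noisy_response(challenge) for _ in range(VOTES)]
voted = [voted_response(challenge) for _ in range(VOTES)]

# Fraction of 1-responses: 0.5 means fully unreliable, 0.0 or 1.0 fully stable.
print("raw response bias   :", np.mean(raw))
print("voted response bias :", np.mean(voted))
```

Raising REPEATS pushes the voted bias closer to 0 or 1 but, for challenges near the decision boundary, never reaches it exactly; that residual deviation is the kind of low unreliability the abstract says a new representation can still capture.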
Related papers
- Designing Short-Stage CDC-XPUFs: Balancing Reliability, Cost, and Security in IoT Devices [2.28438857884398]
Physically Unclonable Functions (PUFs) generate unique cryptographic keys from inherent hardware variations.
Traditional PUFs like Arbiter PUFs (APUFs) and XOR Arbiter PUFs (XOR-PUFs) are susceptible to machine learning (ML) and reliability-based attacks.
We propose an optimized CDC-XPUF design that incorporates a pre-selection strategy to enhance reliability and introduces a novel lightweight architecture.
arXiv Detail & Related papers (2024-09-26T14:50:20Z)
- Tamper-Resistant Safeguards for Open-Weight LLMs [57.90526233549399]
We develop a method for building tamper-resistant safeguards into open-weight LLMs.
We find that our method greatly improves tamper-resistance while preserving benign capabilities.
Our results demonstrate that tamper-resistance is a tractable problem.
arXiv Detail & Related papers (2024-08-01T17:59:12Z)
- Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models [65.30406788716104]
This work investigates the vulnerabilities of security-enhancing diffusion models.
We demonstrate that these models are highly susceptible to DIFF2, a simple yet effective backdoor attack.
Case studies show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models.
arXiv Detail & Related papers (2024-06-14T02:39:43Z)
- Designing a Photonic Physically Unclonable Function Having Resilience to Machine Learning Attacks [2.369276238599885]
We describe a computational PUF model for producing datasets required for training machine learning (ML) attacks.
We find that the modeled PUF generates distributions that resemble uniform white noise.
Preliminary analysis suggests that the PUF exhibits similar resilience to generative adversarial networks.
arXiv Detail & Related papers (2024-04-03T03:58:21Z)
- Attacking Delay-based PUFs with Minimal Adversary Model [13.714598539443513]
Physically Unclonable Functions (PUFs) provide a streamlined solution for lightweight device authentication.
Delay-based Arbiter PUFs, with their ease of implementation and vast challenge space, have received significant attention.
Research is polarized between developing modelling-resistant PUFs and devising machine learning attacks against them.
arXiv Detail & Related papers (2024-03-01T11:35:39Z)
- FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning [0.0]
Federated Learning (FL) represents a promising approach to typical privacy concerns associated with centralized Machine Learning (ML) deployments.
Despite its well-known advantages, FL is vulnerable to security attacks such as Byzantine behaviors and poisoning attacks.
Our proposed approach was tested against various model poisoning attacks, demonstrating superior performance over state-of-the-art aggregation methods.
arXiv Detail & Related papers (2024-02-15T16:42:04Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Attention-Based Real-Time Defenses for Physical Adversarial Attacks in Vision Applications [58.06882713631082]
Deep neural networks exhibit excellent performance in computer vision tasks, but their vulnerability to real-world adversarial attacks raises serious security concerns.
This paper proposes an efficient attention-based defense mechanism that exploits adversarial channel-attention to quickly identify and track malicious objects in shallow network layers.
It also introduces an efficient multi-frame defense framework, validating its efficacy through extensive experiments aimed at evaluating both defense performance and computational cost.
arXiv Detail & Related papers (2023-11-19T00:47:17Z)
- PUF-Phenotype: A Robust and Noise-Resilient Approach to Aid Intra-Group-based Authentication with DRAM-PUFs Using Machine Learning [10.445311342905118]
We propose a classification system using Machine Learning (ML) to accurately identify the origin of noisy memory derived (DRAM) PUF responses.
We achieve up to 98% classification accuracy using a modified deep convolutional neural network (CNN) for feature extraction.
arXiv Detail & Related papers (2022-07-11T08:13:08Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Testing Robustness Against Unforeseen Adversaries [54.75108356391557]
Adversarial robustness research primarily focuses on L_p perturbations.
In real-world applications developers are unlikely to have access to the full range of attacks or corruptions their system will face.
We introduce ImageNet-UA, a framework for evaluating model robustness against a range of unforeseen adversaries.
arXiv Detail & Related papers (2019-08-21T17:36:48Z)