Active learning for fast and slow modeling attacks on Arbiter PUFs
- URL: http://arxiv.org/abs/2308.13645v1
- Date: Fri, 25 Aug 2023 19:34:01 GMT
- Title: Active learning for fast and slow modeling attacks on Arbiter PUFs
- Authors: Vincent Dumoulin, Wenjing Rao, and Natasha Devroye
- Abstract summary: In most modeling attacks, a random subset of challenge-response pairs (CRPs) is used as the labeled data for the machine learning algorithm.
We focus on challenge selection to help the SVM algorithm learn ``fast'' and learn ``slow''.
- Score: 7.8713273072725665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling attacks, in which an adversary uses machine learning techniques to
model a hardware-based Physically Unclonable Function (PUF), pose a great threat
to the viability of these hardware security primitives. In most modeling
attacks, a random subset of challenge-response pairs (CRPs) is used as the
labeled data for the machine learning algorithm. Here, for the arbiter-PUF, a
delay based PUF which may be viewed as a linear threshold function with random
weights (due to manufacturing imperfections), we investigate the role of active
learning in Support Vector Machine (SVM) learning. We focus on challenge
selection to help SVM algorithm learn ``fast'' and learn ``slow''. Our methods
construct challenges rather than relying on a sample pool of challenges as in
prior work. Using active learning to learn ``fast'' (fewer CRPs revealed, higher
accuracies) may help manufacturers learn the manufactured PUFs more
efficiently, or may form a more powerful attack when the attacker may query the
PUF for CRPs at will. Using active learning to select challenges from which
learning is ``slow'' (low accuracy despite a large number of revealed CRPs) may
provide a basis for slowing down attackers who are limited to overhearing CRPs.
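The abstract's premise can be illustrated with a small numerical sketch. This is not the paper's method: the arbiter PUF is modeled by the standard parity feature map with random Gaussian weights, and the "learn fast" strategy is simplified to uncertainty sampling from a random candidate batch (the paper instead constructs challenges directly), with a plain least-squares fit standing in for the SVM. All names and parameters below are illustrative assumptions.

```python
import numpy as np

def parity_features(challenge):
    # Standard arbiter-PUF feature map: phi_i = prod_{j>=i} (1 - 2*c_j),
    # plus a constant bias feature, so the response is a linear
    # threshold function of phi.
    bits = 1 - 2 * np.asarray(challenge)      # {0,1} bits -> {+1,-1}
    phi = np.cumprod(bits[::-1])[::-1]        # suffix products
    return np.append(phi, 1.0)                # bias term

class ArbiterPUF:
    """Linear-threshold model of an n-stage arbiter PUF with random
    delay weights standing in for manufacturing variation."""
    def __init__(self, n_stages, rng):
        self.w = rng.normal(size=n_stages + 1)
    def response(self, challenge):
        return 1 if parity_features(challenge) @ self.w >= 0 else -1

rng = np.random.default_rng(0)
n_stages = 16
puf = ArbiterPUF(n_stages, rng)

# "Learn fast" heuristic: at each step, query the candidate challenge
# whose feature vector lies closest to the current model's hyperplane
# (most uncertain under the attacker's estimate w_hat).
w_hat = np.zeros(n_stages + 1)
X, y = [], []
for _ in range(200):
    cands = rng.integers(0, 2, size=(64, n_stages))
    feats = np.array([parity_features(c) for c in cands])
    c = cands[np.argmin(np.abs(feats @ w_hat))]
    X.append(parity_features(c))
    y.append(puf.response(c))
    # Refit the linear model on all CRPs revealed so far.
    w_hat, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

# Accuracy of the learned model on fresh random challenges.
test = rng.integers(0, 2, size=(2000, n_stages))
pred = np.sign(np.array([parity_features(c) for c in test]) @ w_hat)
truth = np.array([puf.response(c) for c in test])
acc = (pred == truth).mean()
print(f"model accuracy after 200 actively chosen CRPs: {acc:.3f}")
```

Because the parity-transformed PUF is exactly a linear threshold function, a few hundred well-chosen CRPs suffice to reach high prediction accuracy on a 16-stage instance; a "learn slow" variant would instead select challenges far from the boundary, which carry little information about the separating hyperplane.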
Related papers
- Multi-agent Reinforcement Learning-based Network Intrusion Detection System [3.4636217357968904]
Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks.
We propose a novel multi-agent reinforcement learning (RL) architecture, enabling automatic, efficient, and robust network intrusion detection.
Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and effectively adapt to changes in existing attack patterns.
arXiv Detail & Related papers (2024-07-08T09:18:59Z)
- Designing a Photonic Physically Unclonable Function Having Resilience to Machine Learning Attacks [2.369276238599885]
We describe a computational PUF model for producing datasets required for training machine learning (ML) attacks.
We find that the modeled PUF generates distributions that resemble uniform white noise.
Preliminary analysis suggests that the PUF exhibits similar resilience to generative adversarial networks.
arXiv Detail & Related papers (2024-04-03T03:58:21Z)
- InferAligner: Inference-Time Alignment for Harmlessness through Cross-Model Guidance [56.184255657175335]
We develop InferAligner, a novel inference-time alignment method that utilizes cross-model guidance for harmlessness alignment.
Experimental results show that our method can be very effectively applied to domain-specific models in finance, medicine, and mathematics.
It significantly diminishes the Attack Success Rate (ASR) of both harmful instructions and jailbreak attacks, while maintaining almost unchanged performance in downstream tasks.
arXiv Detail & Related papers (2024-01-20T10:41:03Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- Secrets of RLHF in Large Language Models Part I: PPO [81.01936993929127]
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence.
Reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit.
In this report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training.
arXiv Detail & Related papers (2023-07-11T01:55:24Z)
- Lightweight Strategy for XOR PUFs as Security Primitives for Resource-constrained IoT device [0.0]
XOR Arbiter PUF (XOR-PUF) is one of the most studied PUFs.
Recent attack studies reveal that even XOR-PUFs with large XOR sizes are still not safe against machine learning attacks.
We present a strategy that combines the choice of XOR Arbiter PUF (XOR-PUF) architecture parameters with the way XOR-PUFs are used.
arXiv Detail & Related papers (2022-10-04T17:12:36Z)
- PUF-Phenotype: A Robust and Noise-Resilient Approach to Aid Intra-Group-based Authentication with DRAM-PUFs Using Machine Learning [10.445311342905118]
We propose a classification system using Machine Learning (ML) to accurately identify the origin of noisy memory-derived (DRAM) PUF responses.
We achieve up to 98% classification accuracy using a modified deep convolutional neural network (CNN) for feature extraction.
arXiv Detail & Related papers (2022-07-11T08:13:08Z)
- Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline with no computational increment.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
arXiv Detail & Related papers (2022-03-23T06:24:31Z)
- Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up a new possibility for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z)
- A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models [0.0]
We have proposed a mitigation method for adversarial attacks against machine learning models with an autoencoder model.
The main idea behind adversarial attacks against machine learning models is to produce erroneous results by manipulating trained models.
We have also presented the performance of autoencoder models to various attack methods from deep neural networks to traditional algorithms.
arXiv Detail & Related papers (2020-10-17T17:18:17Z)
- MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius [133.47492985863136]
Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly.
We propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses.
For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radius.
arXiv Detail & Related papers (2020-01-08T05:08:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.