Polynomial Bounds for Learning Noisy Optical Physical Unclonable
Functions and Connections to Learning With Errors
- URL: http://arxiv.org/abs/2308.09199v2
- Date: Thu, 7 Sep 2023 16:33:48 GMT
- Title: Polynomial Bounds for Learning Noisy Optical Physical Unclonable
Functions and Connections to Learning With Errors
- Authors: Apollo Albright, Boris Gelfand, Michael Dixon
- Abstract summary: It is shown that a class of optical physical unclonable functions (PUFs) can be learned to arbitrary precision with arbitrarily high probability, even in the presence of noise.
This extends the results of Rührmair et al. (2013), who showed a subset of this class of PUFs to be learnable in polynomial time in the absence of noise.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is shown that a class of optical physical unclonable functions (PUFs) can
be learned to arbitrary precision with arbitrarily high probability, even in
the presence of noise, given access to polynomially many challenge-response
pairs and polynomially bounded computational power, under mild assumptions
about the distributions of the noise and challenge vectors. This extends the
results of Rührmair et al. (2013), who showed a subset of this class of PUFs
to be learnable in polynomial time in the absence of noise, under the
assumption that the optics of the PUF were either linear or had negligible
nonlinear effects. We derive polynomial bounds for the required number of
samples and the computational complexity of a linear regression algorithm,
based on size parameters of the PUF, the distributions of the challenge and
noise vectors, and the probability and accuracy of the regression algorithm,
with a similar analysis to one done by Bootle et al. (2018), who demonstrated a
learning attack on a poorly implemented version of the Learning With Errors
problem.
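As a concrete illustration of the kind of attack the abstract describes, the sketch below fits a simulated linear optical PUF from noisy challenge-response pairs by ordinary least squares. The linear response model, the dimensions, the noise level, and the helper `query_puf` are all illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: the PUF response to a challenge c is
# r = W @ c + noise, with W an unknown linear scattering map.
n_challenge, n_response = 64, 16
W_true = rng.normal(size=(n_response, n_challenge))

def query_puf(challenges, noise_std=0.05):
    """Return noisy responses for a batch of challenge vectors."""
    clean = challenges @ W_true.T
    return clean + rng.normal(scale=noise_std, size=clean.shape)

# Polynomially many CRPs: here ~10x the number of unknowns per response row.
n_samples = 10 * n_challenge
C = rng.normal(size=(n_samples, n_challenge))   # challenge vectors
R = query_puf(C)                                # noisy responses

# Ordinary least squares recovers W to high precision despite the noise.
W_hat = np.linalg.lstsq(C, R, rcond=None)[0].T

print("max entrywise error in the learned map:", np.abs(W_hat - W_true).max())
```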
Related papers
- Proximal Interacting Particle Langevin Algorithms [0.0]
We introduce Proximal Interacting Particle Langevin Algorithms (PIPLA) for inference and learning in latent variable models.
We propose several variants within the novel proximal IPLA family, tailored to the problem of estimating parameters in a non-differentiable statistical model.
Our theory and experiments together show that the PIPLA family can be the de facto choice for parameter estimation in non-differentiable latent variable models.
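For intuition, here is a minimal proximal Langevin sketch on a non-differentiable target; it is a generic proximal-gradient Langevin step applied to a particle population, not the PIPLA scheme itself, and the target, step size, and particle count are made-up choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: p(x) proportional to exp(-f(x) - g(x)) with smooth f(x) = ||x||^2/2
# and non-differentiable g(x) = lam*||x||_1, handled through its prox
# (soft-thresholding).

def grad_f(x):
    return x                                     # gradient of ||x||^2 / 2

def prox_g(x, step, lam=1.0):
    return np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

step, n_particles, dim, n_iters = 1e-2, 256, 2, 2000
X = rng.normal(size=(n_particles, dim))          # particle population

for _ in range(n_iters):
    noise = rng.normal(size=X.shape)
    X = prox_g(X - step * grad_f(X) + np.sqrt(2 * step) * noise, step)

print("estimated mean of the target:", X.mean(axis=0))  # should be near 0
```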
arXiv Detail & Related papers (2024-06-20T13:16:41Z)
- Reinforcement Learning with Function Approximation: From Linear to Nonlinear [4.314956204483073]
This paper reviews recent results on error analysis for reinforcement learning algorithms in linear or nonlinear approximation settings.
We discuss various properties related to the approximation error and present concrete conditions on the transition probability and reward function.
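As a toy instance of the linear setting such reviews cover, the sketch below runs TD(0) with linear features on a small random-walk chain; the chain, features, and step size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# TD(0) with linear features (here one-hot, i.e. tabular as a special case
# of linear function approximation) on a 5-state random walk with a reward
# at the right end.
n_states, gamma, alpha = 5, 0.9, 0.05
phi = np.eye(n_states)                 # feature map
w = np.zeros(n_states)                 # linear value-function weights

def step(s):
    s2 = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward

s = 0
for _ in range(50_000):
    s2, r = step(s)
    td_error = r + gamma * phi[s2] @ w - phi[s] @ w
    w += alpha * td_error * phi[s]       # semi-gradient TD update
    s = 0 if s2 == n_states - 1 else s2  # restart episodes at the left end

print("estimated state values:", np.round(w, 3))
```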
arXiv Detail & Related papers (2023-02-20T00:31:18Z)
- Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for methods solving monotone and structured non-monotone variational inequality problems (VIPs).
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
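A hedged sketch of the clipping idea: clipped stochastic gradient descent-ascent on a strongly monotone toy saddle point with heavy-tailed (infinite-variance) noise. The objective, step size, and clipping radius are illustrative choices, not the paper's tuned settings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Clipped stochastic gradient descent-ascent (SGDA) on the toy saddle
#   f(x, y) = x**2/2 + x*y - y**2/2,  solution (0, 0),
# with Student-t (df=2, infinite-variance) gradient noise.

def noisy_grads(x, y):
    nx, ny = rng.standard_t(df=2, size=2)   # heavy-tailed noise
    return x + y + nx, x - y + ny           # df/dx, df/dy plus noise

def clip(g, radius=1.0):
    return g if abs(g) <= radius else radius * g / abs(g)

x, y, lr = 5.0, -5.0, 1e-2
for _ in range(20_000):
    gx, gy = noisy_grads(x, y)
    x -= lr * clip(gx)                      # descent in x
    y += lr * clip(gy)                      # ascent in y

print("final iterate, should be near (0, 0):", round(x, 3), round(y, 3))
```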
arXiv Detail & Related papers (2022-06-02T15:21:55Z)
- Convergence bounds for nonlinear least squares and applications to tensor recovery [0.0]
We consider the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed.
A critical analysis of our results allows us to derive a sample-efficient algorithm for the model set of low-rank tensors.
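A minimal sketch of least squares under an empirical (Monte Carlo) $L^2$-norm, with a two-parameter exponential model standing in for the paper's nonlinear model sets such as low-rank tensors; the model, weights, and the crude random search are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)

# We only see f at random points and minimize a weighted Monte Carlo
# surrogate of the L^2([0,1]) error over a two-parameter nonlinear model
# a * exp(b * x). Target: f(x) = 2*exp(-3x), so (a, b) = (2, -3).
f = lambda x: 2.0 * np.exp(-3.0 * x)

n = 200
xs = rng.uniform(0.0, 1.0, size=n)     # i.i.d. sample points
wts = np.full(n, 1.0 / n)              # Monte Carlo quadrature weights
ys = f(xs)

def empirical_l2_sq(a, b):
    return np.sum(wts * (a * np.exp(b * xs) - ys) ** 2)

# Crude random search keeps the sketch dependency-free.
best, best_val = None, np.inf
for _ in range(20_000):
    a, b = rng.uniform(-5.0, 5.0, size=2)
    val = empirical_l2_sq(a, b)
    if val < best_val:
        best, best_val = (a, b), val

print("recovered (a, b):", np.round(best, 2), "- target (2, -3)")
```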
arXiv Detail & Related papers (2021-08-11T14:14:02Z)
- Polynomial unconstrained binary optimisation inspired by optical simulation [52.11703556419582]
We propose an algorithm inspired by optical coherent Ising machines to solve polynomial unconstrained binary optimisation (PUBO) problems.
We benchmark the proposed algorithm against existing PUBO algorithms and observe its superior performance.
The application of our algorithm to protein folding and quantum chemistry problems sheds light on the shortcomings of approximating the electronic structure problem by a PUBO problem.
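For flavor, a toy simulation in the spirit of coherent-Ising-machine dynamics on a small random Ising instance; the mean-field update and the pump and coupling constants are illustrative, and this is not the paper's PUBO algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Mean-field dynamics loosely modeled on a coherent Ising machine (CIM):
# each amplitude x_i feels a pump/saturation term and an Ising coupling.
# Signs of the steady-state amplitudes give a candidate spin configuration.
n = 12
J = rng.normal(size=(n, n))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)                   # random symmetric couplings

x = 1e-3 * rng.normal(size=n)              # weak random initial amplitudes
pump, coupling, dt = 1.1, 0.05, 0.05

for _ in range(2000):
    dx = (pump - 1.0 - x**2) * x + coupling * (J @ x)
    x += dt * dx                           # forward-Euler integration

spins = np.sign(x).astype(int)
energy = -0.5 * spins @ J @ spins
print("spins:", spins)
print("Ising energy of the candidate:", round(float(energy), 3))
```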
arXiv Detail & Related papers (2021-06-24T16:39:31Z)
- On the Cryptographic Hardness of Learning Single Periodic Neurons [42.86685497609574]
We show a simple reduction which demonstrates the cryptographic hardness of learning a single neuron over isotropic Gaussian distributions in the presence of noise.
Our proposed algorithm is neither gradient-based nor a statistical query (SQ) algorithm, but is instead based on the celebrated Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm.
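The sketch below only generates the hard instance, a single periodic neuron over isotropic Gaussian inputs, and checks that the labels are uncorrelated even with the hidden linear statistic, which is the intuition behind gradient-based and SQ hardness; the paper's actual LLL-based learner is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hard instance: labels y = cos(2*pi*<w, x>) + noise with isotropic
# Gaussian inputs x and a hidden unit vector w. The paper's learner
# applies LLL lattice basis reduction (e.g. via fpylll), omitted here.
d, n, noise_std = 20, 1000, 0.01
w = rng.normal(size=d)
w /= np.linalg.norm(w)                     # hidden direction

X = rng.normal(size=(n, d))                # isotropic Gaussian inputs
y = np.cos(2 * np.pi * (X @ w)) + noise_std * rng.normal(size=n)

# Even the "right" linear statistic <w, x> is uncorrelated with the label,
# because cos(2*pi*z) is an even function of the Gaussian z.
corr = float(np.corrcoef(y, X @ w)[0, 1])
print("corr(y, <w, x>):", round(corr, 4))  # close to 0
```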
arXiv Detail & Related papers (2021-06-20T20:03:52Z)
- Probabilistic Simplex Component Analysis [66.30587591100566]
PRISM is a probabilistic simplex component analysis approach for identifying the vertices of a data-circumscribing simplex from observed data.
The problem has a rich variety of applications, the most notable being hyperspectral unmixing in remote sensing and non-negative matrix factorization in machine learning.
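As a simple geometric baseline on the same problem (not PRISM itself), the sketch below recovers simplex vertices with the successive projection algorithm, assuming some data points lie near the vertices; the dimensions and Dirichlet concentration are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)

# Points are convex combinations of unknown vertices V; with a small
# Dirichlet concentration, some points land near the vertices, which is
# what the successive projection algorithm (SPA) exploits.
dim, n_vertices, n_points = 5, 3, 2000
V = rng.normal(size=(n_vertices, dim))               # true vertices
weights = rng.dirichlet(0.3 * np.ones(n_vertices), size=n_points)
X = weights @ V                                      # simplex-structured data

def spa(X, k):
    R, picks = X.copy(), []
    for _ in range(k):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))  # most extreme point
        picks.append(i)
        u = R[i] / np.linalg.norm(R[i])
        R = R - np.outer(R @ u, u)                     # project it out
    return X[picks]

V_hat = spa(X, n_vertices)
errors = [float(np.min(np.linalg.norm(V - v, axis=1))) for v in V_hat]
print("distance of each estimate to its nearest true vertex:",
      np.round(errors, 3))
```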
arXiv Detail & Related papers (2021-03-18T05:39:00Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum-Product Networks (SPNs).
We show that selective SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
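To make the quantity concrete, here is a tiny ELBO computed two ways for a two-variable discrete model: exactly by enumeration, and by naive sampling. The pairwise model and mean-field variational distribution are invented for the demo; the paper's point is that circuit structure makes such expectations tractable without sampling.

```python
import itertools
import numpy as np

rng = np.random.default_rng(8)

# Two binary variables with a pairwise log-potential; q is a factorized
# (mean-field) variational distribution. ELBO = E_q[log p~(x) - log q(x)].
theta = np.array([[0.0, 0.8], [0.8, 0.0]])

def log_p_unnorm(x):                       # x in {0, 1}^2
    s = 2 * np.asarray(x) - 1
    return 0.5 * float(s @ theta @ s)

q = np.array([0.7, 0.4])                   # q(x_i = 1)

def log_q(x):
    return float(sum(np.log(q[i] if xi else 1 - q[i])
                     for i, xi in enumerate(x)))

# Exact ELBO by enumerating all four configurations.
elbo = sum(np.exp(log_q(x)) * (log_p_unnorm(x) - log_q(x))
           for x in itertools.product([0, 1], repeat=2))

# Naive Monte Carlo estimator of the same quantity.
samples = (rng.random((20_000, 2)) < q).astype(int)
mc = np.mean([log_p_unnorm(x) - log_q(x) for x in samples])

print("exact ELBO:", round(float(elbo), 4),
      "| sampling estimate:", round(float(mc), 4))
```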
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
- Expectation propagation on the diluted Bayesian classifier [0.0]
We introduce a statistical-mechanics-inspired strategy that addresses the problem of sparse feature selection in the context of binary classification.
A computational scheme known as expectation propagation (EP) is used to train a continuous-weights perceptron learning a classification rule.
EP is a robust and competitive algorithm in terms of variable selection properties, estimation accuracy and computational complexity.
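A hedged sketch of EP's core primitive, moment matching: approximating a Gaussian-times-step-function "tilted" distribution (the shape that arises with perceptron likelihoods) by the Gaussian with the same first two moments. The cavity parameters are arbitrary, and the full EP loop from the paper is omitted.

```python
import numpy as np
from scipy.stats import norm

# Cavity distribution N(mu, sigma^2) times a step factor 1[x > 0]. EP
# replaces this "tilted" distribution by the Gaussian matching its first
# two moments, which are closed-form for the truncated normal.
mu, sigma = 0.5, 1.0
z = mu / sigma
alpha = norm.pdf(z) / norm.cdf(z)

mu_new = mu + sigma * alpha                       # matched mean
var_new = sigma**2 * (1.0 - alpha * (alpha + z))  # matched variance

print("moment-matched Gaussian: mean =", round(mu_new, 4),
      "variance =", round(var_new, 4))
```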
arXiv Detail & Related papers (2020-09-20T23:59:44Z)
- Computational Barriers to Estimation from Low-Degree Polynomials [81.67886161671379]
We study the power of low-degree polynomials for the task of detecting the presence of hidden structures.
For a large class of "signal plus noise" problems, we give a user-friendly lower bound for the best possible mean squared error achievable by any low-degree polynomial.
As applications, we give a tight characterization of the low-degree minimum mean squared error for the planted submatrix and planted dense subgraph problems.
arXiv Detail & Related papers (2020-08-05T17:52:10Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the noise in its success is still unclear.
We show that heavy tails commonly arise in the parameters as a consequence of multiplicative noise due to variance.
A detailed analysis examines the key contributing factors, including step size and data, with similar results observed across state-of-the-art neural network models.
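A small simulation of the mechanism: a linear recursion with multiplicative noise (a Kesten-type process) develops heavy-tailed stationary behaviour even though each step's noise is Gaussian. All constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Kesten-type recursion x <- a*x + b with random multiplicative factor a.
# Each step's noise is Gaussian, yet the stationary law has power-law tails
# (here E[log|a|] < 0, so the chains are stable but heavy-tailed).
n_chains, n_steps = 20_000, 500
x = np.zeros(n_chains)
for _ in range(n_steps):
    a = 1.0 + 0.5 * rng.normal(size=n_chains)   # multiplicative noise
    b = rng.normal(size=n_chains)               # additive noise
    x = a * x + b

# Heavy tails: the survival function decays polynomially, not exponentially.
for t in [1, 5, 25, 125]:
    print(f"P(|x| > {t:>3}) ~ {np.mean(np.abs(x) > t):.4f}")
```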
arXiv Detail & Related papers (2020-06-11T09:58:01Z)