Storage and Learning phase transitions in the Random-Features Hopfield
Model
- URL: http://arxiv.org/abs/2303.16880v2
- Date: Fri, 28 Apr 2023 16:09:36 GMT
- Title: Storage and Learning phase transitions in the Random-Features Hopfield
Model
- Authors: Matteo Negri, Clarissa Lauditi, Gabriele Perugini, Carlo Lucibello,
Enrico Malatesta
- Abstract summary: The Hopfield model is a paradigmatic model of neural networks that has been analyzed for many decades in the statistical physics, neuroscience, and machine learning communities.
Inspired by the manifold hypothesis in machine learning, we propose and investigate a generalization of the standard setting that we name Random-Features Hopfield Model.
- Score: 9.489398590336643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Hopfield model is a paradigmatic model of neural networks that has been
analyzed for many decades in the statistical physics, neuroscience, and machine
learning communities. Inspired by the manifold hypothesis in machine learning,
we propose and investigate a generalization of the standard setting that we
name Random-Features Hopfield Model. Here $P$ binary patterns of length $N$ are
generated by applying a random projection followed by a non-linearity to
Gaussian vectors sampled in a latent space of dimension $D$. Using the
replica method from statistical physics, we derive the phase diagram of the
model in the limit $P,N,D\to\infty$ with fixed ratios $\alpha=P/N$ and
$\alpha_D=D/N$. Besides the usual retrieval phase, where the patterns can be
dynamically recovered from some initial corruption, we uncover a new phase
where the features characterizing the projection can be recovered instead. We
call this phenomenon the learning phase transition, as the features are not
explicitly given to the model but rather are inferred from the patterns in an
unsupervised fashion.
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) framework, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Neural Implicit Manifold Learning for Topology-Aware Density Estimation [15.878635603835063]
Current generative models learn $\mathcal{M}$ by mapping an $m$-dimensional latent variable through a neural network.
We show that our model can learn manifold-supported distributions with complex topologies more accurately than pushforward models.
arXiv Detail & Related papers (2022-06-22T18:00:00Z) - Fundamental limits to learning closed-form mathematical models from data [0.0]
Given a noisy dataset, when is it possible to learn the true generating model from the data alone?
We show that this problem displays a transition from a low-noise phase in which the true model can be learned, to a phase in which the observation noise is too high for the true model to be learned by any method.
arXiv Detail & Related papers (2022-04-06T10:00:33Z) - Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements [62.997667081978825]
The characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Model-based Reinforcement Learning for Continuous Control with Posterior
Sampling [10.91557009257615]
We study model-based posterior sampling for reinforcement learning (PSRL) in continuous state-action spaces.
We present MPC-PSRL, a model-based posterior sampling algorithm with model predictive control for action selection.
arXiv Detail & Related papers (2020-11-20T21:00:31Z) - Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z) - The Generalized Lasso with Nonlinear Observations and Generative Priors [63.541900026673055]
We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models.
We show that our result can be extended to the uniform recovery guarantee under the assumption of a so-called local embedding property.
arXiv Detail & Related papers (2020-06-22T16:43:35Z)