Capacity of the Hebbian-Hopfield network associative memory
- URL: http://arxiv.org/abs/2403.01907v1
- Date: Mon, 4 Mar 2024 10:10:23 GMT
- Title: Capacity of the Hebbian-Hopfield network associative memory
- Authors: Mihailo Stojnic
- Abstract summary: In \cite{Hop82}, Hopfield introduced a \emph{Hebbian} learning rule based neural network model and suggested how it can efficiently operate as an associative memory.
We study two famous patterns' basins of attraction: \textbf{\emph{(i)}} the AGS one from \cite{AmiGutSom85}; and \textbf{\emph{(ii)}} the NLT one from \cite{Newman88,Louk94,Louk94a,Louk97,Tal98}.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In \cite{Hop82}, Hopfield introduced a \emph{Hebbian} learning rule based
neural network model and suggested how it can efficiently operate as an
associative memory. Studying random binary patterns, he also uncovered that, if
a small fraction of errors is tolerated in the stored patterns retrieval, the
capacity of the network (maximal number of memorized patterns, $m$) scales
linearly with each pattern's size, $n$. Moreover, he famously predicted
$\alpha_c=\lim_{n\rightarrow\infty}\frac{m}{n}\approx 0.14$. We study this very
same scenario with two famous patterns' basins of attraction:
\textbf{\emph{(i)}} The AGS one from \cite{AmiGutSom85}; and
\textbf{\emph{(ii)}} The NLT one from
\cite{Newman88,Louk94,Louk94a,Louk97,Tal98}. Relying on the \emph{fully lifted
random duality theory} (fl RDT) from \cite{Stojnicflrdt23}, we obtain the
following explicit capacity characterizations on the first level of lifting:
\begin{equation}
\alpha_c^{(AGS,1)} = \left ( \max_{\delta\in \left ( 0,\frac{1}{2}\right )
}\frac{1-2\delta}{\sqrt{2} \mbox{erfinv} \left ( 1-2\delta\right )} -
\frac{2}{\sqrt{2\pi}} e^{-\left ( \mbox{erfinv}\left ( 1-2\delta \right )\right
)^2}\right )^2 \approx \mathbf{0.137906} \end{equation}
\begin{equation}
\alpha_c^{(NLT,1)} = \frac{\mbox{erf}(x)^2}{2x^2}-1+\mbox{erf}(x)^2 \approx
\mathbf{0.129490}, \quad 1-\mbox{erf}(x)^2-
\frac{2\mbox{erf}(x)e^{-x^2}}{\sqrt{\pi}x}+\frac{2e^{-2x^2}}{\pi}=0.
\end{equation}
Substantial numerical work gives, on the second level of lifting,
$\alpha_c^{(AGS,2)} \approx \mathbf{0.138186}$ and $\alpha_c^{(NLT,2)} \approx
\mathbf{0.12979}$, effectively uncovering a remarkably fast lifting
convergence. Moreover, the obtained AGS characterizations exactly match the
replica symmetry based ones of \cite{AmiGutSom85} and the corresponding
symmetry breaking ones of \cite{SteKuh94}.
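For readers who want to reproduce the two first-level constants, here is a minimal SciPy sketch (a reader's verification, not code from the paper; the search interval for $\delta$ and the root bracket for $x$ are ad hoc choices): it maximizes the AGS expression over $\delta\in(0,1/2)$, solves the NLT stationarity equation for $x$, and evaluates the corresponding capacities, which should reproduce the boldfaced constants above.

```python
# Numerical check of the first-level lifting capacities quoted above.
import numpy as np
from scipy.special import erf, erfinv
from scipy.optimize import minimize_scalar, brentq

# AGS basin: alpha_c^(AGS,1) = ( max_{delta in (0,1/2)} f(delta) )^2
def f_ags(delta):
    t = erfinv(1.0 - 2.0 * delta)
    return ((1.0 - 2.0 * delta) / (np.sqrt(2.0) * t)
            - (2.0 / np.sqrt(2.0 * np.pi)) * np.exp(-t ** 2))

res = minimize_scalar(lambda d: -f_ags(d),
                      bounds=(1e-6, 0.5 - 1e-6), method="bounded")
print("alpha_c^(AGS,1) ~", f_ags(res.x) ** 2)   # expected ~ 0.137906

# NLT basin: x solves the stated stationarity equation, then
# alpha_c^(NLT,1) = erf(x)^2 / (2 x^2) - 1 + erf(x)^2
def g_nlt(x):
    return (1.0 - erf(x) ** 2
            - 2.0 * erf(x) * np.exp(-x ** 2) / (np.sqrt(np.pi) * x)
            + 2.0 * np.exp(-2.0 * x ** 2) / np.pi)

x_star = brentq(g_nlt, 0.5, 2.0)
alpha_nlt = erf(x_star) ** 2 / (2.0 * x_star ** 2) - 1.0 + erf(x_star) ** 2
print("alpha_c^(NLT,1) ~", alpha_nlt)           # expected ~ 0.129490
```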
Related papers
- Sharp Gap-Dependent Variance-Aware Regret Bounds for Tabular MDPs [54.28273395444243]
We show that the Monotonic Value Propagation (MVP) algorithm achieves a variance-aware gap-dependent regret bound stated in terms of the positive suboptimality gaps $\Delta_h(s,a)$, the horizon $H$, the number of episodes $K$, and a maximal variance term $\mathtt{Var}_{\max}$.
arXiv Detail & Related papers (2025-06-06T20:33:57Z) - On the $O(\frac{\sqrt{d}}{K^{1/4}})$ Convergence Rate of AdamW Measured by $\ell_1$ Norm [54.28350823319057]
This paper establishes the convergence rate $\frac{1}{K}\sum_{k=1}^{K}E\left[\|\nabla f(x^k)\|_1\right]\leq O(\frac{\sqrt{d}C}{K^{1/4}})$ for AdamW measured by the $\ell_1$ norm, where $K$ represents the iteration number, $d$ denotes the model dimension, and $C$ matches the constant in the optimal convergence rate of SGD.
arXiv Detail & Related papers (2025-05-17T05:02:52Z) - MLPs at the EOC: Concentration of the NTK [7.826806223782053]
We study the concentration of the Neural Tangent Kernel (NTK) $K_\theta$.
We prove that an approximate version of gradient independence holds at finite width.
Our results imply that, in order to accurately approximate the limit, hidden layer widths have to grow quadratically as $m_k = k^2 m$ for some $m \in \mathbb{N}+1$ for sufficient concentration.
arXiv Detail & Related papers (2025-01-24T18:58:50Z) - Learning a Single Neuron Robustly to Distributional Shifts and Adversarial Label Noise [38.551072383777594]
We study the problem of learning a single neuron with respect to the $L_2$ loss in the presence of adversarial distribution shifts.
A new algorithm is developed to approximate the vector minimizing the squared loss with respect to the worst distribution that is within $\chi^2$-divergence of $p_0$.
arXiv Detail & Related papers (2024-11-11T03:43:52Z) - Efficient Continual Finite-Sum Minimization [52.5238287567572]
We propose a key twist on finite-sum minimization, dubbed continual finite-sum minimization.
Our approach significantly improves upon the $\mathcal{O}(n/\epsilon)$ FOs (first-order oracle calls) that $\mathrm{StochasticGradientDescent}$ requires.
We also prove that there is no natural first-order method with $\mathcal{O}\left(n/\epsilon^{\alpha}\right)$ gradient complexity for $\alpha < 1/4$, establishing that the first-order complexity of our method is nearly tight.
arXiv Detail & Related papers (2024-06-07T08:26:31Z) - On the quantum Guerra-Morato Action Functional [0.0]
Given a smooth potential $W:\mathrm{T}^n \to \mathbb{R}$ on the torus, we consider the quantum Guerra-Morato action functional.
We derive an explicit expression for the second variation of a critical solution.
arXiv Detail & Related papers (2024-03-09T10:30:21Z) - Exact objectives of random linear programs and mean widths of random
polyhedrons [0.0]
We consider \emph{random linear programs} (rlps) as a subclass of \emph{random optimization problems} (rops).
Our particular focus is on appropriate linear objectives which connect the rlps to the mean widths of random polyhedrons/polytopes.
arXiv Detail & Related papers (2024-03-06T11:51:52Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture, setting it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - On the $O(\frac{\sqrt{d}}{T^{1/4}})$ Convergence Rate of RMSProp and Its Momentum Extension Measured by $\ell_1$ Norm [59.65871549878937]
This paper considers RMSProp and its momentum extension and establishes the convergence rate $\frac{1}{T}\sum_{k=1}^{T}E\left[\|\nabla f(x^k)\|_1\right]\leq O(\frac{\sqrt{d}}{T^{1/4}})$.
Our convergence rate matches the lower bound with respect to all the coefficients except the dimension $d$.
Our convergence rate can be considered to be analogous to the corresponding $\frac{1}{T}\sum_{k=1}^{T}$-averaged rate of SGD.
arXiv Detail & Related papers (2024-02-01T07:21:32Z) - Spectral Statistics of the Sample Covariance Matrix for High Dimensional
Linear Gaussians [12.524855369455421]
We study the performance of the ordinary least squares (OLS) method for the \emph{estimation} of a high dimensional stable state transition matrix.
The OLS estimator incurs a \emph{phase transition} and becomes \emph{transient}: increasing the number of samples only worsens the estimation error.
arXiv Detail & Related papers (2023-12-10T06:55:37Z) - Convergence Rates of Stochastic Zeroth-order Gradient Descent for Łojasiewicz Functions [6.137707924685666]
We prove convergence rates of Stochastic Zeroth-order Gradient Descent (SZGD) algorithms for Łojasiewicz functions.
Our results show that the sequence $\{ f(\mathbf{x}_t) - f(\mathbf{x}_\infty) \}_{t \in \mathbb{N}}$ can converge faster than $\{ \|\mathbf{x}_t - \mathbf{x}_\infty\| \}_{t \in \mathbb{N}}$.
arXiv Detail & Related papers (2022-10-31T00:53:17Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ whose loss $F(\mathbf{w})$ is within a constant factor $C$ of optimal, up to an additive $\epsilon$, with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - On the Self-Penalization Phenomenon in Feature Selection [69.16452769334367]
We describe an implicit sparsity-inducing mechanism based on minimization over a family of kernels.
As an application, we use this sparsity-inducing mechanism to build algorithms that are consistent for feature selection.
arXiv Detail & Related papers (2021-10-12T09:36:41Z) - DIPPA: An improved Method for Bilinear Saddle Point Problems [18.65143269806133]
This paper deals with bilinear saddle point problems of the form $\min_{\mathbf{x}} \max_{\mathbf{y}}\; g(\mathbf{x}) + \mathbf{x}^\top \mathbf{A}\mathbf{y} - h(\mathbf{y})$ and obtains complexity guarantees involving $\sqrt{\kappa_x \kappa_y (\kappa_x + \kappa_y)}$, where $\kappa_x$ and $\kappa_y$ denote the condition numbers in $\mathbf{x}$ and $\mathbf{y}$.
arXiv Detail & Related papers (2021-03-15T10:55:30Z) - Optimal Mean Estimation without a Variance [103.26777953032537]
We study the problem of heavy-tailed mean estimation in settings where the variance of the data-generating distribution does not exist.
We design an estimator which attains the smallest possible confidence interval as a function of $n,d,\delta$.
arXiv Detail & Related papers (2020-11-24T22:39:21Z)