Practically Solving LPN in High Noise Regimes Faster Using Neural
Networks
- URL: http://arxiv.org/abs/2303.07987v1
- Date: Tue, 14 Mar 2023 15:44:20 GMT
- Title: Practically Solving LPN in High Noise Regimes Faster Using Neural
Networks
- Authors: Haozhe Jiang, Kaiyue Wen, Yilei Chen
- Abstract summary: We design families of two-layer neural networks that practically outperform classical algorithms in high-noise, low-dimension regimes.
Our algorithm can also be plugged into the hybrid algorithms for solving medium- or large-dimension instances.
- Score: 2.750124853532831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We conduct a systematic study of solving the learning parity with noise
problem (LPN) using neural networks. Our main contribution is designing
families of two-layer neural networks that practically outperform classical
algorithms in high-noise, low-dimension regimes. We consider three settings
where the numbers of LPN samples are abundant, very limited, and in between. In
each setting we provide neural network models that solve LPN as fast as
possible. For some settings we are also able to provide theoretical
explanations for the design of our models. Compared with the previous
experiments of Esser, Kübler, and May (CRYPTO 2017), for dimension $n = 26$
and noise rate $\tau = 0.498$, the "Guess-then-Gaussian-elimination" algorithm
takes 3.12 days on 64 CPU cores, whereas our neural network algorithm takes 66
minutes on 8 GPUs. Our algorithm can also be plugged into hybrid algorithms
for solving medium- or large-dimension LPN instances.
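To make the setting concrete, here is a minimal, self-contained sketch of the basic pipeline: generate LPN samples $(a, \langle a, s \rangle + e \bmod 2)$ with noise rate $\tau$ and fit a small two-layer network to the noisy labels. The layer width, optimizer, hyperparameters, and the unit-vector read-out at the end are illustrative assumptions on our part, not the architecture or secret-recovery procedure used in the paper.

```python
import torch

def gen_lpn(n=26, m=200_000, tau=0.4, seed=0):
    """Sample LPN instances: a ~ U({0,1}^n), b = <a, s> + e (mod 2), e ~ Bernoulli(tau)."""
    g = torch.Generator().manual_seed(seed)
    s = torch.randint(0, 2, (n,), generator=g)       # secret vector
    A = torch.randint(0, 2, (m, n), generator=g)     # public sample matrix
    e = (torch.rand(m, generator=g) < tau).long()    # noise bits
    b = ((A * s).sum(dim=1) + e) % 2                 # noisy parities
    return A.float(), b.float(), s

A, b, s = gen_lpn()
n = A.shape[1]

# Two-layer network: one hidden layer, one output logit (the width is an assumption).
model = torch.nn.Sequential(
    torch.nn.Linear(n, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.BCEWithLogitsLoss()

for step in range(2000):
    idx = torch.randint(0, A.shape[0], (1024,))
    opt.zero_grad()
    loss = loss_fn(model(A[idx]).squeeze(-1), b[idx])
    loss.backward()
    opt.step()

# Naive read-out (illustration only, not the paper's recovery procedure):
# if the network approximates the noiseless parity function, its prediction
# on the j-th unit vector estimates the j-th secret bit, since <e_j, s> = s_j.
guess = (torch.sigmoid(model(torch.eye(n))).squeeze(-1) > 0.5).long()
print("fraction of secret bits recovered:", (guess == s).float().mean().item())
```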
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
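For context, below is a minimal sketch of the classic Sinkhorn iteration that the proposed layer extends: alternately rescale the rows and columns of a positive matrix until it matches target marginals. This is the textbook single-marginal-pair version, not LinSATNet's jointly encoded multi-marginal extension; the matrix size and iteration count are arbitrary.

```python
import torch

def sinkhorn(scores, row_marginals, col_marginals, iters=50, eps=1e-9):
    """Project exp(scores) toward matrices with the given row and column sums."""
    P = torch.exp(scores)
    for _ in range(iters):
        P = P * (row_marginals / (P.sum(dim=1) + eps)).unsqueeze(1)  # match row sums
        P = P * (col_marginals / (P.sum(dim=0) + eps)).unsqueeze(0)  # match column sums
    return P

# Example: project a 3x3 score matrix toward a doubly stochastic matrix.
scores = torch.randn(3, 3)
P = sinkhorn(scores, torch.ones(3), torch.ones(3))
print(P.sum(dim=1), P.sum(dim=0))  # both approximately all ones
```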
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- On Statistical Learning of Branch and Bound for Vehicle Routing Optimization [3.6922704509753084]
We train neural networks to emulate the decision-making process of the computationally expensive Strong Branching strategy.
We find that this approach can match or improve upon the performance of the branch and bound algorithm.
arXiv Detail & Related papers (2023-10-15T23:59:57Z)
- Latent Space Representations of Neural Algorithmic Reasoners [15.920449080528536]
We perform a detailed analysis of the structure of the latent space induced by the GNN when executing algorithms.
We identify two possible failure modes: (i) loss of resolution, making it hard to distinguish similar values; (ii) inability to deal with values outside the range observed during training.
We show that addressing these failure modes leads to improvements on the majority of algorithms in the standard CLRS-30 benchmark when using the state-of-the-art Triplet-GMPNN processor.
arXiv Detail & Related papers (2023-07-17T22:09:12Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex than prior work suggests.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Algorithms for Efficiently Learning Low-Rank Neural Networks [12.916132936159713]
We study algorithms for learning low-rank neural networks.
We present a provably efficient algorithm which learns an optimal low-rank approximation to a single-hidden-layer ReLU network.
We propose a novel low-rank framework for training low-rank $\textit{deep}$ networks.
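As background, here is a minimal sketch of the basic operation such low-rank frameworks build on: replacing a dense weight matrix with its best rank-$r$ factorization via truncated SVD. This textbook step is only an illustration, not the paper's provably efficient learning algorithm; the matrix shape and rank are arbitrary.

```python
import torch

def low_rank_factor(W, r):
    """Split W (d_out x d_in) into U (d_out x r) and V (r x d_in) with U @ V the best rank-r approximation."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    return U[:, :r] * S[:r], Vh[:r, :]

W = torch.randn(256, 128)          # a dense layer's weight matrix
U, V = low_rank_factor(W, r=16)    # two thin factors replacing the dense layer
print(torch.linalg.matrix_rank(U @ V).item())          # 16
print((torch.norm(W - U @ V) / torch.norm(W)).item())  # relative approximation error
```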
arXiv Detail & Related papers (2022-02-02T01:08:29Z)
- Revisiting Recursive Least Squares for Training Deep Neural Networks [10.44340837533087]
Recursive least squares (RLS) algorithms were once widely used for training small-scale neural networks, due to their fast convergence.
Previous RLS algorithms are unsuitable for training deep neural networks (DNNs), since they have high computational complexity and too many preconditions.
We propose three novel RLS optimization algorithms for training feedforward neural networks, convolutional neural networks and recurrent neural networks.
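For reference, a minimal sketch of the classic RLS update for a single linear model, which is the building block these algorithms revisit. The forgetting factor and initialization below are illustrative, and the paper's feedforward, convolutional, and recurrent variants are not reproduced here.

```python
import numpy as np

class RLS:
    """Classic recursive least squares for a linear model y ~ w @ x."""

    def __init__(self, dim, lam=0.99, delta=1.0):
        self.w = np.zeros(dim)        # weight estimate
        self.P = np.eye(dim) / delta  # inverse (regularized) covariance
        self.lam = lam                # forgetting factor

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)            # gain vector
        self.w = self.w + k * (y - self.w @ x)  # correct by the prediction error
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.w

rng = np.random.default_rng(0)
true_w = rng.normal(size=8)
rls = RLS(dim=8)
for _ in range(500):
    x = rng.normal(size=8)
    rls.update(x, true_w @ x + 0.01 * rng.normal())
print(np.max(np.abs(rls.w - true_w)))  # small after enough samples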
arXiv Detail & Related papers (2021-09-07T17:43:51Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- Waypoint Planning Networks [66.72790309889432]
We propose a hybrid algorithm based on LSTMs that combines a local kernel (a classic algorithm such as A*) with a global kernel using a learned algorithm.
We compare WPN against A*, as well as related works including motion planning networks (MPNet) and value iteration networks (VIN).
WPN's search space is considerably smaller than that of A*, while still generating near-optimal results.
arXiv Detail & Related papers (2021-05-01T18:02:01Z)
- Neural Thompson Sampling [94.82847209157494]
We propose a new algorithm, called Neural Thompson Sampling, which adapts deep neural networks for both exploration and exploitation.
At the core of our algorithm is a novel posterior distribution of the reward, where its mean is the neural network approximator, and its variance is built upon the neural tangent features of the corresponding neural network.
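A simplified sketch of the sampling rule described above, assuming a small MLP reward model: the posterior mean is the network's prediction and the variance is built from its gradient (neural tangent) features. The context dimension, network width, prior scale, and regularizer below are illustrative assumptions, and the between-round training of the network is omitted.

```python
import torch

# A small reward network; the 8-dimensional contexts and the width are assumptions.
net = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
num_params = sum(p.numel() for p in net.parameters())
U = torch.eye(num_params)   # accumulated outer products of tangent features
lam, nu = 1.0, 0.1          # regularizer and exploration (prior) scale

def tangent_features(x):
    """Return the prediction and its gradient w.r.t. all network parameters."""
    net.zero_grad()
    mu = net(x).squeeze()
    mu.backward()
    g = torch.cat([p.grad.flatten() for p in net.parameters()])
    return mu.detach(), g

def select_arm(contexts):
    """Thompson step: sample one reward per arm from N(mean, variance), play the argmax."""
    samples = []
    for x in contexts:
        mu, g = tangent_features(x)
        sigma2 = lam * nu ** 2 * (g @ torch.linalg.solve(U, g))
        samples.append(torch.normal(mu, sigma2.sqrt()))
    return int(torch.stack(samples).argmax())

contexts = [torch.randn(8) for _ in range(5)]   # one context vector per arm
arm = select_arm(contexts)
_, g = tangent_features(contexts[arm])
U += torch.outer(g, g)   # update the design matrix with the played arm's features
# (a full implementation would observe the reward and retrain the network here)
```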
arXiv Detail & Related papers (2020-10-02T07:44:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.