Fast semidefinite programming with feedforward neural networks
- URL: http://arxiv.org/abs/2011.05785v2
- Date: Thu, 12 Nov 2020 10:53:36 GMT
- Title: Fast semidefinite programming with feedforward neural networks
- Authors: Tamás Kriváchy, Yu Cai, Joseph Bowles, Daniel Cavalcanti and Nicolas Brunner
- Abstract summary: We propose to solve feasibility semidefinite programs using artificial neural networks.
We train the network without having to exactly solve the semidefinite program even once.
We demonstrate that the trained neural network gives decent accuracy, while showing an orders-of-magnitude increase in speed compared to a traditional solver.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semidefinite programs are an important class of optimization tasks, often
used in time-sensitive applications. Though solvable in polynomial time, in
practice they can be too slow for online, i.e. real-time,
applications. Here we propose to solve feasibility semidefinite programs using
artificial neural networks. Given the optimization constraints as an input, a
neural network outputs values for the optimization parameters such that the
constraints are satisfied, both for the primal and the dual formulations of the
task. We train the network without having to exactly solve the semidefinite
program even once, thus avoiding the possibly time-consuming task of having to
generate many training samples with conventional solvers. The neural network
method is only inconclusive if both the primal and dual models fail to provide
feasible solutions. Otherwise we always obtain a certificate, which guarantees
that false positives are excluded. We examine the performance of the method on a
hierarchy of quantum information tasks, the Navascués-Pironio-Acín
hierarchy applied to the Bell scenario. We demonstrate that the trained neural
network gives decent accuracy, while showing an orders-of-magnitude increase in
speed compared to a traditional solver.
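To make the approach concrete, below is a minimal PyTorch sketch of the primal idea: a feedforward network receives the entries of an SDP instance fixed by the problem, outputs the free variables, and is trained only on a loss that penalizes negative eigenvalues of the assembled constraint matrix, so no conventional solver is needed to generate training data. This is not the authors' code; the matrix layout, network architecture, and all names (ConstraintNet, assemble_matrix, DIM, N_FIXED, N_FREE) are illustrative assumptions.

```python
# Minimal sketch (illustrative only, not the authors' code) of the primal idea:
# a feedforward network maps the fixed entries of an SDP instance to the free
# entries, trained by penalizing negative eigenvalues of the assembled matrix.
# Zero loss certifies primal feasibility; no SDP solver is called during training.
import torch
import torch.nn as nn

DIM = 6      # size of the symmetric constraint (moment) matrix in this toy SDP
N_FIXED = 8  # strictly-upper-triangular entries fixed by the problem instance
N_FREE = 7   # strictly-upper-triangular entries supplied by the network
             # (N_FIXED + N_FREE = DIM*(DIM-1)/2 = 15; the diagonal is fixed to 1)

class ConstraintNet(nn.Module):
    """Feedforward network: fixed constraint data -> free SDP variables."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FIXED, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_FREE),
        )

    def forward(self, c):
        return self.net(c)

def assemble_matrix(c, y):
    """Build a symmetric DIM x DIM matrix with unit diagonal, filling the upper
    triangle with the fixed entries c and the free entries y (the layout here is
    arbitrary; a real SDP dictates where each entry goes)."""
    batch = c.shape[0]
    m = torch.diag_embed(torch.ones(batch, DIM))
    iu = torch.triu_indices(DIM, DIM, offset=1)            # strictly upper triangle
    m[:, iu[0], iu[1]] = torch.cat([c, y], dim=1)
    diag = torch.diag_embed(torch.diagonal(m, dim1=1, dim2=2))
    return m + m.transpose(1, 2) - diag                    # symmetrize

def feasibility_loss(m):
    """Sum of the negative part of the spectrum; zero means m is positive semidefinite."""
    return torch.relu(-torch.linalg.eigvalsh(m)).sum(dim=1).mean()

model = ConstraintNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    c = torch.rand(128, N_FIXED)                           # random problem instances
    loss = feasibility_loss(assemble_matrix(c, model(c)))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper's setting a second network is trained in the same way on the dual formulation; an instance is declared inconclusive only if neither the primal nor the dual network reaches a feasible, certifying output.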
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
arXiv Detail & Related papers (2024-07-18T22:05:21Z) - Training Multi-layer Neural Networks on Ising Machine [41.95720316032297]
This paper proposes an Ising learning algorithm to train quantized neural networks (QNNs).
As far as we know, this is the first algorithm to train multi-layer feedforward networks on Ising machines.
arXiv Detail & Related papers (2023-11-06T04:09:15Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive
Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show that simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z) - Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Optimal Stopping via Randomized Neural Networks [6.677219861416146]
This paper presents the benefits of using randomized neural networks instead of standard basis functions or deep neural networks.
Our approaches are applicable to high dimensional problems where the existing approaches become increasingly impractical.
In all cases, our algorithms outperform the state-of-the-art and other relevant machine learning approaches in terms of time.
arXiv Detail & Related papers (2021-04-28T09:47:21Z) - A Projection Algorithm for the Unitary Weights [0.0]
Unitary neural networks are promising alternatives for solving the exploding and vanishing activation/gradient problem.
They often require longer training time due to the additional unitary constraints on their weight matrices.
Here we present a novel algorithm that uses a backpropagation technique with Lie algebra to compute approximate unitary weights from their pre-trained, non-unitary counterparts.
arXiv Detail & Related papers (2021-02-19T17:33:17Z) - Neural Spectrahedra and Semidefinite Lifts: Global Convex Optimization
of Polynomial Activation Neural Networks in Fully Polynomial-Time [31.94590517036704]
We develop exact convex optimization formulations for two-layer neural networks with second-degree polynomial activations.
We show that the semidefinite lift is exact, so global optimization has complexity polynomial in the input dimension and sample size for all input data.
The proposed approach is significantly faster than standard backpropagation and obtains better test accuracy.
arXiv Detail & Related papers (2021-01-07T08:43:01Z) - ODEN: A Framework to Solve Ordinary Differential Equations using
Artificial Neural Networks [0.0]
We prove that a specific loss function, which does not require knowledge of the exact solution, is a suitable metric to evaluate a neural network's performance.
Neural networks are shown to be proficient at approximating continuous solutions within their training domains.
A user-friendly and adaptable open-source code (ODE$\mathcal{N}$) is provided on GitHub.
arXiv Detail & Related papers (2020-05-28T15:34:10Z)