Sampling binary sparse coding QUBO models using a spiking neuromorphic
processor
- URL: http://arxiv.org/abs/2306.01940v2
- Date: Wed, 2 Aug 2023 16:55:29 GMT
- Title: Sampling binary sparse coding QUBO models using a spiking neuromorphic
processor
- Authors: Kyle Henke, Elijah Pelofske, Georg Hahn, Garrett T. Kenyon
- Abstract summary: We consider the problem of computing a sparse binary representation of an image.
We aim to find a sparse binary vector indicating the minimal set of basis vectors that, when added together, best reconstruct the given input.
This yields a so-called Quadratic Unconstrained Binary Optimization (QUBO) problem.
- Score: 3.0586855806896045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of computing a sparse binary representation of an
image. To be precise, given an image and an overcomplete, non-orthonormal
basis, we aim to find a sparse binary vector indicating the minimal set of
basis vectors that when added together best reconstruct the given input. We
formulate this problem with an $L_2$ loss on the reconstruction error, and an
$L_0$ (or, equivalently, an $L_1$) loss on the binary vector enforcing
sparsity. This yields a so-called Quadratic Unconstrained Binary Optimization
(QUBO) problem, whose solution is generally NP-hard to find. The contribution
of this work is twofold. First, the method of unsupervised and unnormalized
dictionary feature learning for a desired sparsity level to best match the data
is presented. Second, the binary sparse coding problem is then solved on the
Loihi 1 neuromorphic chip by the use of stochastic networks of neurons to
traverse the non-convex energy landscape. The solutions are benchmarked against
the classical heuristic simulated annealing. We demonstrate that neuromorphic
computing is suitable for sampling low-energy solutions of binary sparse coding
QUBO models, and that although Loihi 1 is capable of sampling very sparse
solutions of the QUBO models, the implementation needs improvement in order to
be competitive with simulated annealing.
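To make the formulation concrete, below is a minimal sketch of the QUBO construction together with a single-bit-flip simulated annealing baseline of the kind the paper benchmarks against. The names D (dictionary matrix whose columns are the basis vectors), x (flattened image), and lam (sparsity weight) are placeholders, and the paper's exact normalization and annealing schedule may differ. The key step is the binary identity a_i^2 = a_i, which folds the linear terms onto the diagonal of Q.

```python
import numpy as np

def build_sparse_coding_qubo(D, x, lam):
    """QUBO matrix for: min over a in {0,1}^n of ||x - D @ a||^2 + lam * ||a||_0.

    Expanding the squared error and using a_i**2 == a_i for binary a gives
    an energy a^T Q a (plus the constant x^T x) with
        Q[i, j] = (D^T D)[i, j]                       for i != j
        Q[i, i] = (D^T D)[i, i] - 2 (D^T x)[i] + lam
    """
    G = D.T @ D                                  # Gram matrix of the basis vectors
    Q = G.copy()
    np.fill_diagonal(Q, np.diag(G) - 2.0 * (D.T @ x) + lam)
    return Q

def simulated_annealing(Q, n_sweeps=2000, t_hot=2.0, t_cold=0.01, seed=None):
    """Single-bit-flip simulated annealing on E(a) = a^T Q a (Q symmetric)."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    a = rng.integers(0, 2, size=n)
    for t in range(n_sweeps):
        temp = t_hot * (t_cold / t_hot) ** (t / max(n_sweeps - 1, 1))  # geometric cooling
        for i in rng.permutation(n):
            # Exact energy change from flipping bit i.
            delta = (1 - 2 * a[i]) * (Q[i, i] + 2.0 * (Q[i] @ a - Q[i, i] * a[i]))
            if delta <= 0 or rng.random() < np.exp(-delta / temp):
                a[i] ^= 1
    return a
```

For a dictionary D of shape (pixels, n_atoms) and an image x of shape (pixels,), Q = build_sparse_coding_qubo(D, x, lam) followed by simulated_annealing(Q) returns one candidate binary code; the reconstruction is D @ a.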
Related papers
- An encoding of argumentation problems using quadratic unconstrained binary optimization [1.104960878651584]
We develop a way to encode several NP-complete problems in Abstract Argumentation as Quadratic Unconstrained Binary Optimization (QUBO) problems.
With the QUBO formulation, exploiting new computing architectures, such as quantum and digital annealers, is possible.
We performed tests to verify the correctness and applicability of the encodings on classical problems in Argumentation and on the enforcement of argument sets.
arXiv Detail & Related papers (2024-09-09T11:29:46Z)
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer, based on an extension of the classic Sinkhorn algorithm, for jointly encoding multiple sets of marginal distributions (a sketch of the classic Sinkhorn iteration it extends follows below).
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
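As context for the entry above: LinSATNet's layer generalizes the classic Sinkhorn iteration to several sets of marginals jointly; the snippet below is only the textbook single-pair version, shown to fix ideas. Function and argument names are illustrative, not LinSATNet's API.

```python
import numpy as np

def sinkhorn(scores, row_sums, col_sums, n_iters=100, eps=1e-9):
    """Classic Sinkhorn iteration: rescale a positive matrix so its rows
    and columns approach the target marginals (which must have equal
    total mass)."""
    P = np.asarray(scores, dtype=float) + eps     # strictly positive start
    for _ in range(n_iters):
        P *= (row_sums / P.sum(axis=1))[:, None]  # enforce row marginals
        P *= (col_sums / P.sum(axis=0))[None, :]  # enforce column marginals
    return P
```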
- Comparing Quantum Annealing and Spiking Neuromorphic Computing for Sampling Binary Sparse Coding QUBO Problems [2.249916681499244]
Given an image and an overcomplete, non-orthonormal basis, we aim to find a sparse binary vector indicating the minimal set of basis vectors that, when added together, best reconstruct the given input.
This yields a Quadratic Unconstrained Binary Optimization (QUBO) problem, whose optimal solutions are in general NP-hard to find.
Next, we solve the sparse representation QUBO by implementing it both on a D-Wave annealer with Pegasus chip connectivity via minor embedding, and on the Intel Loihi 2 spiking neuromorphic processor (a hedged D-Wave submission sketch follows below).
arXiv Detail & Related papers (2024-05-30T22:56:15Z)
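A minimal sketch of what the D-Wave side of such an experiment can look like with the Ocean SDK, assuming a dense symmetric QUBO matrix Q (for example from the construction sketched after the abstract). EmbeddingComposite computes a minor embedding onto the hardware (e.g. Pegasus) graph automatically; running this requires D-Wave API access, and the Loihi 2 side is hardware-specific and omitted here.

```python
from dwave.system import DWaveSampler, EmbeddingComposite  # D-Wave Ocean SDK

def sample_qubo_on_dwave(Q, num_reads=1000):
    """Submit a dense symmetric QUBO matrix to a D-Wave annealer."""
    n = Q.shape[0]
    # Ocean takes the QUBO as a dict; fold the symmetric off-diagonal
    # pairs (Q[i,j] + Q[j,i]) into the upper triangle.
    qubo = {(i, i): float(Q[i, i]) for i in range(n)}
    qubo.update({(i, j): 2.0 * float(Q[i, j])
                 for i in range(n) for j in range(i + 1, n)})
    sampler = EmbeddingComposite(DWaveSampler())
    return sampler.sample_qubo(qubo, num_reads=num_reads)
```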
- The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are trained by optimizing a non-convex objective.
In this paper we examine convex reformulations of neural network training.
We show that the stationary points of the non-convex objective can be characterized as global optima of subsampled convex (Lasso-type) programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- A Novel Normalized-Cut Solver with Nearest Neighbor Hierarchical Initialization [107.07093621337084]
Normalized-Cut (N-Cut) is a well-known spectral clustering model.
Traditional N-Cut solvers are two-stage: 1) calculating the continuous spectral embedding of the normalized Laplacian matrix; 2) discretization via $K$-means or spectral rotation (a sketch of this traditional pipeline follows below).
We propose a novel N-Cut solver based on the coordinate descent method.
arXiv Detail & Related papers (2023-11-26T07:11:58Z)
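For reference, this is roughly what the traditional two-stage pipeline mentioned above looks like (the baseline the paper's coordinate-descent solver replaces), sketched with NumPy/SciPy/scikit-learn; W is a symmetric nonnegative affinity matrix and k the number of clusters, both placeholder names.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def two_stage_ncut(W, k):
    """Traditional two-stage N-Cut: spectral embedding, then k-means."""
    d = W.sum(axis=1)
    inv_sqrt = 1.0 / np.sqrt(d + 1e-12)
    # Symmetrically normalized Laplacian: I - D^{-1/2} W D^{-1/2}.
    L_sym = np.eye(len(d)) - (inv_sqrt[:, None] * W * inv_sqrt[None, :])
    # Stage 1: continuous embedding from the k smallest eigenvectors.
    _, U = eigh(L_sym, subset_by_index=[0, k - 1])
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12  # row-normalize
    # Stage 2: discretize the embedding with k-means.
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```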
- Training Multi-layer Neural Networks on Ising Machine [41.95720316032297]
This paper proposes an Ising learning algorithm to train quantized neural networks (QNNs).
As far as we know, this is the first algorithm to train multi-layer feedforward networks on Ising machines.
arXiv Detail & Related papers (2023-11-06T04:09:15Z)
- Quantum Sparse Coding [5.130440339897477]
We develop a quantum-inspired algorithm for sparse coding.
The emergence of quantum computers and Ising machines can potentially lead to more accurate estimations.
We conduct numerical experiments with simulated data on LightSolver's quantum-inspired digital platform.
arXiv Detail & Related papers (2022-09-08T13:00:30Z)
- Multigrid-augmented deep learning preconditioners for the Helmholtz equation [4.18804572788063]
We present a data-driven approach to solve the discrete heterogeneous Helmholtz equation at high wavenumbers.
We combine classical iterative solvers with convolutional neural networks (CNNs) to form a preconditioner which is applied within a Krylov solver.
arXiv Detail & Related papers (2022-03-14T10:31:11Z)
- Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z)
- Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how the variability in the space of solvers can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect the robustness of neural ODE models to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z)
- SiMaN: Sign-to-Magnitude Network Binarization [165.5630656849309]
We show that our weight binarization provides an analytical solution by encoding high-magnitude weights into +1s, and 0s otherwise (a toy illustration of this encoding rule follows below).
We prove that the learned weights of binarized networks roughly follow a Laplacian distribution that does not allow entropy maximization.
Our method, dubbed sign-to-magnitude network binarization (SiMaN), is evaluated on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2021-02-16T07:03:51Z)
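A toy illustration of the encoding rule stated in the entry above (high-magnitude weights become +1, the rest 0). The keep_ratio parameter and the quantile threshold are placeholders; the paper derives an analytical encoding, which this sketch does not reproduce.

```python
import numpy as np

def magnitude_binarize(w, keep_ratio=0.5):
    """Toy version of the stated rule: weights in the top `keep_ratio`
    fraction by magnitude become +1, all others become 0."""
    thresh = np.quantile(np.abs(w), 1.0 - keep_ratio)
    return (np.abs(w) >= thresh).astype(np.float32)
```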
This list is automatically generated from the titles and abstracts of the papers on this site.