Higher-Order Neuromorphic Ising Machines -- Autoencoders and Fowler-Nordheim Annealers are all you need for Scalability
- URL: http://arxiv.org/abs/2506.19964v1
- Date: Tue, 24 Jun 2025 19:17:02 GMT
- Title: Higher-Order Neuromorphic Ising Machines -- Autoencoders and Fowler-Nordheim Annealers are all you need for Scalability
- Authors: Faiek Ahsan, Saptarshi Maiti, Zihao Chen, Jakob Kaiser, Ankita Nandi, Madhuvanthi Srivatsav, Johannes Schemmel, Andreas G. Andreou, Jason Eshraghian, Chetan Singh Thakur, Shantanu Chakrabartty
- Abstract summary: We report a higher-order neuromorphic Ising machine that exhibits superior scalability compared to architectures based on quadratization. Asymptotic convergence to the Ising ground state is ensured by sampling the autoencoder latent space defined by the spins. We show that techniques based on the sparsity of the interconnection matrix, such as graph coloring, can be effectively applied to higher-order neuromorphic Ising machines.
- Score: 6.455936422535347
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We report a higher-order neuromorphic Ising machine that exhibits superior scalability compared to architectures based on quadratization, while also achieving state-of-the-art quality and reliability in solutions with competitive time-to-solution metrics. At the core of the proposed machine is an asynchronous autoencoder architecture that captures higher-order interactions by directly manipulating Ising clauses instead of Ising spins, thereby maintaining resource complexity independent of interaction order. Asymptotic convergence to the Ising ground state is ensured by sampling the autoencoder latent space defined by the spins, based on the annealing dynamics of Fowler-Nordheim quantum mechanical tunneling. To demonstrate the advantages of the proposed higher-order neuromorphic Ising machine, we systematically solved benchmark combinatorial optimization problems such as MAX-CUT and MAX-SAT, comparing the results to those obtained using a second-order Ising machine employing the same annealing process. Our findings indicate that the proposed architecture consistently provides higher-quality solutions in shorter time frames than the second-order model across multiple runs. Additionally, we show that techniques based on the sparsity of the interconnection matrix, such as graph coloring, can be effectively applied to higher-order neuromorphic Ising machines, improving both the solution quality and the time-to-solution. The time-to-solution can be further improved through hardware co-design, as demonstrated in this paper using a field-programmable gate array (FPGA). The results presented in this paper provide further evidence that autoencoders and Fowler-Nordheim annealers are sufficient to achieve reliability and scaling of any-order neuromorphic Ising machines.
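To make the clause-centric formulation concrete, the sketch below evaluates a higher-order Ising energy directly over weighted clauses (products of spins) and minimizes it with Metropolis-style annealing. This is a minimal illustrative software model only: the clause encoding, the `energy`/`anneal` helpers, and the roughly 1/log(t) cooling schedule used as a stand-in for the Fowler-Nordheim annealer dynamics are assumptions for exposition, not the paper's asynchronous neuromorphic implementation.

```python
import math
import random

# Hypothetical clause encoding: each clause is a list of (spin_index, sign) pairs and
# contributes weight * prod(sign * spin) to the energy. The per-clause cost does not
# grow with the interaction order, mirroring the clause-centric idea in the abstract.
def energy(spins, clauses, weights):
    total = 0.0
    for w, clause in zip(weights, clauses):
        term = 1.0
        for idx, sign in clause:
            term *= sign * spins[idx]
        total += w * term
    return total

# Metropolis-style annealing with an assumed ~1/log(t) cooling schedule, used here only
# as a software stand-in for the Fowler-Nordheim annealing dynamics.
def anneal(spins, clauses, weights, steps=10_000, t0=2.0):
    cur_e = energy(spins, clauses, weights)
    best, best_e = list(spins), cur_e
    for t in range(1, steps + 1):
        temp = t0 / math.log(t + 1.0)
        i = random.randrange(len(spins))
        spins[i] = -spins[i]                       # propose a single spin flip
        new_e = energy(spins, clauses, weights)
        if new_e <= cur_e or random.random() < math.exp(-(new_e - cur_e) / temp):
            cur_e = new_e                          # accept the flip
            if cur_e < best_e:
                best, best_e = list(spins), cur_e
        else:
            spins[i] = -spins[i]                   # reject: undo the flip
    return best, best_e

# Example: one third-order clause -s0*s1*s2, minimized when the spin product is +1.
clauses, weights = [[(0, 1), (1, 1), (2, 1)]], [-1.0]
spins = [random.choice([-1, 1]) for _ in range(3)]
print(anneal(spins, clauses, weights))
```

Because each clause contributes a single product term, adding a fourth- or fifth-order interaction only lengthens one clause rather than introducing auxiliary spins, which is the scaling advantage that quadratization-based machines give up.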
Related papers
- Solving the Hubbard model with Neural Quantum States [66.55653324211542]
We study the state-of-the-art results for the doped two-dimensional (2D) Hubbard model.
We find that different attention heads in the NQS ansatz can directly encode correlations at different scales.
Our work establishes NQS as a powerful tool for solving challenging many-fermion systems.
arXiv Detail & Related papers (2025-07-03T14:08:25Z)
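For reference, the model named in the Neural Quantum States entry above is the standard doped two-dimensional Hubbard model; the Hamiltonian below is the textbook definition rather than notation taken from that paper.

```latex
H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

Here t is the nearest-neighbour hopping amplitude, U is the on-site repulsion, and doping corresponds to studying the model away from half filling.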
- Efficient Optimization Accelerator Framework for Multistate Ising Problems [0.0]
Ising machines are a prominent class of hardware architectures that aim to solve NP-hard optimization problems.
We model the spin interactions as a generalized logic function to significantly reduce the exploration space.
We also design a 1024-neuron all-to-all connected probabilistic Ising accelerator that shows up to 10000x performance acceleration.
arXiv Detail & Related papers (2025-05-26T17:23:47Z)
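As context for the probabilistic Ising accelerator described in the entry above, the sketch below shows a generic probabilistic-bit (p-bit) style update that such accelerators commonly realize in hardware; the coupling matrix `J`, bias `h`, inverse temperature `beta`, and the update rule itself are generic assumptions and may differ from the cited paper's generalized-logic formulation.

```python
import numpy as np

def p_bit_sweep(m, J, h, beta, rng):
    # One asynchronous sweep over binary stochastic neurons m_i in {-1, +1};
    # J is assumed symmetric with a zero diagonal.
    for i in rng.permutation(len(m)):
        local_field = J[i] @ m + h[i]
        m[i] = 1.0 if rng.uniform(-1.0, 1.0) < np.tanh(beta * local_field) else -1.0
    return m

# Toy usage: a random symmetric coupling matrix over 8 spins.
rng = np.random.default_rng(0)
n = 8
J = rng.normal(size=(n, n))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)
h = np.zeros(n)
m = rng.choice([-1.0, 1.0], size=n)
for _ in range(100):
    m = p_bit_sweep(m, J, h, beta=2.0, rng=rng)
print(m, -0.5 * m @ J @ m - h @ m)  # final spins and their Ising energy
```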
- Parallel Ising Annealer via Gradient-based Hamiltonian Monte Carlo [11.307633403964031]
The Ising annealer is a quantum-inspired computing architecture for optimization problems.
The main innovation is the fusion of an approximate gradient-based approach into the Ising annealer.
The prototype annealer solves Ising problems with both integer and fractional coefficients using up to 200 spins on a single low-cost FPGA board.
arXiv Detail & Related papers (2024-07-14T13:51:35Z)
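The entry above fuses gradient information into the annealer. One minimal way to expose such gradients is to relax the spins to continuous values and differentiate the quadratic Ising energy, as sketched below; the relaxation and the energy form are illustrative assumptions, not the cited paper's Hamiltonian Monte Carlo formulation.

```python
import numpy as np

def relaxed_ising_energy(x, J, h):
    # E(x) = -1/2 x^T J x - h^T x, with spins relaxed from {-1, +1}^n to the box [-1, 1]^n.
    return -0.5 * x @ J @ x - h @ x

def relaxed_ising_grad(x, J, h):
    # Gradient of the relaxed energy (J assumed symmetric); usable by HMC or plain gradient descent.
    return -(J @ x) - h

# Rounding with np.sign(x) after optimization recovers a candidate spin configuration.
```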
- ON-OFF Neuromorphic ISING Machines using Fowler-Nordheim Annealers [4.429465736433621]
We introduce NeuroSA, a neuromorphic architecture specifically designed to ensure convergence to the ground state of an Ising problem.
Across multiple runs, NeuroSA consistently generates solutions that are concentrated around the state-of-the-art results (within 99%) or surpass the current state-of-the-art solutions for Max Independent Set benchmarks.
For practical illustration, we present results from an implementation of NeuroSA on the SpiNNaker2 platform.
arXiv Detail & Related papers (2024-06-07T19:18:09Z)
- Real-Time Image Segmentation via Hybrid Convolutional-Transformer Architecture Search [51.89707241449435]
In this paper, we address the challenge of integrating multi-head self-attention into high-resolution representation CNNs efficiently.
We develop a multi-target multi-branch supernet method, which fully utilizes the advantages of high-resolution features.
We present a series of models via the Hybrid Convolutional-Transformer Architecture Search (HyCTAS) method that searches for the best hybrid combination of light-weight convolution layers and memory-efficient self-attention layers.
arXiv Detail & Related papers (2024-03-15T15:47:54Z)
- Efficient Optimization with Higher-Order Ising Machines [5.697064222465131]
We show that higher-order Ising machines can solve satisfiability problems more resource-efficiently than traditional second-order Ising machines.
Our results improve the current state-of-the-art for Ising machines.
arXiv Detail & Related papers (2022-12-07T03:18:05Z)
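To illustrate why higher-order machines handle satisfiability natively, which is relevant both to the entry above and to the main paper's MAX-SAT experiments, note that a 3-SAT clause maps to a single cubic spin term. With spins s = 2x - 1 in {-1, +1}, the clause (x_i OR x_j OR NOT x_k) is violated exactly when x_i = x_j = 0 and x_k = 1, so its penalty is the indicator

```latex
E_{\text{clause}}(s)
  = \frac{1 - s_i}{2}\cdot\frac{1 - s_j}{2}\cdot\frac{1 + s_k}{2}
  = \frac{1}{8}\left(1 - s_i - s_j + s_k + s_i s_j - s_i s_k - s_j s_k + s_i s_j s_k\right).
```

This is the standard textbook construction, not notation from either paper. Summing such penalties over all clauses yields a MAX-SAT energy whose ground states are maximally satisfying assignments, with one third-order term per three-literal clause instead of the auxiliary spins that quadratization would require.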
- SRU++: Pioneering Fast Recurrence with Attention for Speech Recognition [49.42625022146008]
We present the advantages of applying SRU++ in ASR tasks by comparing with Conformer across multiple ASR benchmarks.
Specifically, our analysis shows that SRU++ can surpass Conformer on long-form speech input by a large margin.
arXiv Detail & Related papers (2021-10-11T19:23:50Z)
- Communication-Computation Efficient Device-Edge Co-Inference via AutoML [4.06604174802643]
Device-edge co-inference partitions a deep neural network between a resource-constrained mobile device and an edge server.
On-device model sparsity level and intermediate feature compression ratio have direct impacts on workload and communication overhead.
We propose a novel automated machine learning (AutoML) framework based on deep reinforcement learning (DRL).
arXiv Detail & Related papers (2021-08-30T06:36:30Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
- Machine Learning Framework for Quantum Sampling of Highly-Constrained, Continuous Optimization Problems [101.18253437732933]
We develop a generic, machine learning-based framework for mapping continuous-space inverse design problems into surrogate unconstrained binary optimization problems.
We showcase the framework's performance on two inverse design problems by optimizing (i) thermal emitter topologies for thermophotovoltaic applications and (ii) diffractive meta-gratings for highly efficient beam steering.
arXiv Detail & Related papers (2021-05-06T02:22:23Z)
- ASFD: Automatic and Scalable Face Detector [129.82350993748258]
We propose a novel Automatic and Scalable Face Detector (ASFD).
ASFD is based on a combination of neural architecture search techniques as well as a new loss design.
Our ASFD-D6 outperforms the prior strong competitors, and our lightweight ASFD-D0 runs at more than 120 FPS with MobileNet for VGA-resolution images.
arXiv Detail & Related papers (2020-03-25T06:00:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.