Boltzmann Reinforcement Learning for Noise Resilience in Analog Ising Machines
- URL: http://arxiv.org/abs/2602.09162v1
- Date: Mon, 09 Feb 2026 20:07:42 GMT
- Title: Boltzmann Reinforcement Learning for Noise Resilience in Analog Ising Machines
- Authors: Aditya Choudhary, Saaketh Desai, Prasad Iyer
- Abstract summary: We introduce BRAIN (Boltzmann Reinforcement for Analog Ising Networks), a distribution learning framework. By shifting from state-by-state sampling to aggregating information across multiple noisy measurements, BRAIN is resilient to Gaussian noise. BRAIN exhibits $\mathcal{O}(N^{1.55})$ scaling up to 65,536 spins and maintains robustness against severe measurement uncertainty up to 40%.
- Score: 0.8739101659113154
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Analog Ising machines (AIMs) have emerged as a promising paradigm for combinatorial optimization, utilizing physical dynamics to solve Ising problems with high energy efficiency. However, the performance of traditional optimization and sampling algorithms on these platforms is often limited by inherent measurement noise. We introduce BRAIN (Boltzmann Reinforcement for Analog Ising Networks), a distribution learning framework that utilizes variational reinforcement learning to approximate the Boltzmann distribution. By shifting from state-by-state sampling to aggregating information across multiple noisy measurements, BRAIN is resilient to Gaussian noise characteristic of AIMs. We evaluate BRAIN across diverse combinatorial topologies, including the Curie-Weiss and 2D nearest-neighbor Ising systems. We find that under realistic 3% Gaussian measurement noise, BRAIN maintains 98% ground state fidelity, whereas Markov Chain Monte Carlo (MCMC) methods degrade to 51% fidelity. Furthermore, BRAIN reaches the MCMC-equivalent solution up to 192x faster under these conditions. BRAIN exhibits $\mathcal{O}(N^{1.55})$ scaling up to 65,536 spins and maintains robustness against severe measurement uncertainty up to 40%. Beyond ground state optimization, BRAIN accurately captures thermodynamic phase transitions and metastable states, providing a scalable and noise-resilient method for utilizing analog computing architectures in complex optimizations.
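The abstract's central idea — replacing state-by-state sampling with gradient updates averaged over many noisy energy readouts — can be illustrated with a minimal sketch. This is not the authors' BRAIN implementation; it is a generic REINFORCE-style update of independent per-spin Bernoulli parameters, and all function names, the relative-noise model, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_energy(s, J, sigma=0.03):
    """Ising energy E(s) = -1/2 s^T J s, read out with ~3% Gaussian noise."""
    E = -0.5 * s @ J @ s
    return E + sigma * abs(E) * rng.standard_normal()

def learn_distribution(J, T=1.0, steps=2000, batch=64, lr=0.05):
    """Score-function (REINFORCE) descent on the expected noisy energy.

    Averaging the gradient over a batch of noisy measurements suppresses
    zero-mean readout noise -- the intuition behind distribution learning
    on analog hardware, as opposed to trusting any single sample."""
    n = J.shape[0]
    theta = np.zeros(n)                                   # logits of P(s_i = +1)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-theta))
        s = np.where(rng.random((batch, n)) < p, 1.0, -1.0)  # sample spin configs
        E = np.array([noisy_energy(si, J) for si in s])
        score = (s + 1.0) / 2.0 - p                       # d log P(s) / d theta
        adv = (E - E.mean()) / (T * n)                    # baseline + scale
        theta -= lr * (adv[:, None] * score).mean(axis=0)
    return np.where(theta >= 0.0, 1.0, -1.0)              # most likely state

# Ferromagnetic Curie-Weiss couplings: ground states are all-up / all-down.
n = 16
J = np.ones((n, n)) - np.eye(n)
s_star = learn_distribution(J)
```

Despite every energy readout being noisy, the batch-averaged update drives the learned distribution toward an aligned (ground-state) configuration.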
Related papers
- Extending Straight-Through Estimation for Robust Neural Networks on Analog CIM Hardware [5.100973962435092]
We propose a noise-aware training method for analog Compute-In-Memory (CIM) systems. We decouple forward noise simulation from backward gradient computation, enabling noise-aware training with more accurate but computationally intractable noise modeling. Our framework achieves up to 5.3% accuracy improvement on image classification, 0.72 perplexity reduction on text generation, 2.2x speedup in training time, and 37.9% lower peak memory usage.
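The decoupling this summary describes — a noisy forward pass paired with a clean backward gradient — is the essence of straight-through estimation. The sketch below applies it to a toy least-squares problem; the multiplicative device-noise model, learning rate, and problem sizes are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_noisy(w, x, sigma=0.1):
    """Forward pass through a simulated noisy analog matrix-vector multiply:
    each weight is perturbed by relative Gaussian device noise."""
    w_dev = w + sigma * np.abs(w) * rng.standard_normal(w.shape)
    return w_dev @ x

def ste_grad(x, grad_out):
    """Straight-through estimate: backpropagate as if the weights were
    noiseless, even though the forward pass sampled a noisy device instance."""
    return np.outer(grad_out, x)

# One noise-aware training loop on a least-squares toy problem.
w = rng.standard_normal((2, 3))
x = np.array([1.0, -1.0, 0.5])
y_target = np.array([0.3, -0.7])
for _ in range(500):
    y = forward_noisy(w, x)            # noisy forward
    grad_out = y - y_target            # d(0.5 * ||y - t||^2) / dy
    w -= 0.05 * ste_grad(x, grad_out)  # clean backward (straight-through)
y_clean = w @ x
```

Because the injected noise is zero-mean, the clean-gradient updates still converge in expectation: the noiseless readout `y_clean` ends up near the target despite every training-time forward pass being perturbed.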
arXiv Detail & Related papers (2025-08-16T06:53:44Z) - Noise Hypernetworks: Amortizing Test-Time Compute in Diffusion Models [57.49136894315871]
The new paradigm of test-time scaling has yielded remarkable breakthroughs in reasoning models and generative vision models. We propose one solution to the problem of integrating test-time scaling knowledge into a model during post-training. We replace reward-guided test-time noise optimization in diffusion models with a Noise Hypernetwork that modulates the initial input noise.
arXiv Detail & Related papers (2025-08-13T17:33:37Z) - Noise-reduction of multimode Gaussian Boson Sampling circuits via Unitary Averaging [41.94295877935867]
We improve Gaussian Boson Sampling (GBS) circuits by integrating the unitary averaging protocol. We mitigate arbitrary interferometric noise, including beam-splitter and phase-shifter imperfections. We derive a power-law formula predicting performance gains in large-scale systems.
arXiv Detail & Related papers (2025-06-06T04:28:06Z) - Optimization Strategies for Variational Quantum Algorithms in Noisy Landscapes [0.061173711613792085]
Variational Quantum Algorithms (VQAs) are a leading approach for near-term quantum computing. We benchmarked more than fifty metaheuristic algorithms for the Variational Quantum Eigensolver (VQE). Results identify a small set of resilient algorithms for noisy VQE and provide guidance for optimization strategies on near-term quantum devices.
arXiv Detail & Related papers (2025-06-02T14:22:30Z) - Handling Label Noise via Instance-Level Difficulty Modeling and Dynamic Optimization [40.87754131017707]
Deep neural networks degrade in generalization performance under noisy supervision. Existing methods focus on isolating clean subsets or correcting noisy labels. We propose a novel two-stage noisy learning framework that enables instance-level optimization.
arXiv Detail & Related papers (2025-05-01T19:12:58Z) - Provable Accuracy Bounds for Hybrid Dynamical Optimization and Sampling [1.5551894637785635]
We provide non-asymptotic convergence guarantees for hybrid LNLS by reducing to block Langevin Diffusion (BLD) algorithms. With finite device variation, we provide explicit bounds on the 2-Wasserstein bias in terms of step duration, noise strength, and function parameters.
arXiv Detail & Related papers (2024-10-08T22:03:41Z) - Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
We integrate four SOTA robust loss functions with our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in terms of both noise tolerance and performance.
arXiv Detail & Related papers (2023-01-18T04:54:58Z) - Walking Noise: On Layer-Specific Robustness of Neural Architectures against Noisy Computations and Associated Characteristic Learning Dynamics [1.5184189132709105]
We discuss the implications of additive, multiplicative and mixed noise for different classification tasks and model architectures.
We propose a methodology called Walking Noise which injects layer-specific noise to measure the robustness.
We conclude with a discussion of the use of this methodology in practice, among others, discussing its use for tailored multi-execution in noisy environments.
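The layer-wise probing idea this entry summarizes — inject noise at one layer at a time and measure how much the output deviates — can be sketched minimally. The tiny fixed network, additive noise scale, and MSE metric below are illustrative assumptions, not the paper's Walking Noise protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

# A tiny fixed two-layer ReLU network and a batch of probe inputs.
W1 = 0.5 * rng.standard_normal((8, 4))
W2 = 0.5 * rng.standard_normal((3, 8))
X = rng.standard_normal((100, 4))

def forward(X, noise_layer=None, sigma=0.0):
    """Run the net, optionally adding Gaussian noise after one layer's matmul."""
    h = X @ W1.T
    if noise_layer == 1:
        h = h + sigma * rng.standard_normal(h.shape)
    h = np.maximum(h, 0.0)          # ReLU
    y = h @ W2.T
    if noise_layer == 2:
        y = y + sigma * rng.standard_normal(y.shape)
    return y

clean = forward(X)
# "Walk" the injected noise through the layers, recording output deviation
# per injection site as a crude layer-specific robustness measure.
sensitivity = {
    layer: float(np.mean((forward(X, noise_layer=layer, sigma=0.5) - clean) ** 2))
    for layer in (1, 2)
}
```

Comparing the per-layer entries of `sensitivity` indicates which layers amplify or absorb injected noise, which is the kind of signal a layer-specific robustness methodology aggregates.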
arXiv Detail & Related papers (2022-12-20T17:09:08Z) - Learning based signal detection for MIMO systems with unknown noise statistics [84.02122699723536]
This paper aims to devise a generalized maximum likelihood (ML) estimator to robustly detect signals with unknown noise statistics.
In practice, there is little or even no statistical knowledge on the system noise, which in many cases is non-Gaussian, impulsive and not analyzable.
Our framework is driven by an unsupervised learning approach, where only the noise samples are required.
arXiv Detail & Related papers (2021-01-21T04:48:15Z) - Modeling and mitigation of cross-talk effects in readout noise with applications to the Quantum Approximate Optimization Algorithm [0.0]
Noise mitigation can be performed up to some error for which we derive upper bounds.
We report experiments on 15 (23) qubits using IBM's devices to test both the noise model and the error-mitigation scheme.
We show that similar effects are expected for Haar-random quantum states and states generated by shallow-depth random circuits.
arXiv Detail & Related papers (2021-01-07T02:19:58Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.