Low-rank combinatorial optimization and statistical learning by spatial
photonic Ising machine
- URL: http://arxiv.org/abs/2303.14993v2
- Date: Tue, 8 Aug 2023 03:37:18 GMT
- Title: Low-rank combinatorial optimization and statistical learning by spatial
photonic Ising machine
- Authors: Hiroshi Yamashita, Ken-ichi Okubo, Suguru Shimomura, Yusuke Ogura, Jun
Tanida, Hideyuki Suzuki
- Abstract summary: We propose a new computing model for the SPIM that can accommodate any Ising problem without changing its optical implementation.
The proposed model is particularly efficient for Ising problems with low-rank interaction matrices, such as knapsack problems.
We demonstrate that learning, classification, and sampling of the MNIST handwritten digit images are achieved efficiently using the model with low-rank interactions.
- Score: 0.44040106718326594
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The spatial photonic Ising machine (SPIM) [D. Pierangeli et al., Phys. Rev.
Lett. 122, 213902 (2019)] is a promising optical architecture utilizing spatial
light modulation for solving large-scale combinatorial optimization problems
efficiently. The primitive version of the SPIM, however, can accommodate Ising
problems with only rank-one interaction matrices. In this Letter, we propose a
new computing model for the SPIM that can accommodate any Ising problem without
changing its optical implementation. The proposed model is particularly
efficient for Ising problems with low-rank interaction matrices, such as
knapsack problems. Moreover, it acquires the learning ability of Boltzmann
machines. We demonstrate that learning, classification, and sampling of the
MNIST handwritten digit images are achieved efficiently using the model with
low-rank interactions. Thus, the proposed model exhibits higher practical
applicability to various problems of combinatorial optimization and statistical
learning, without losing the scalability inherent in the SPIM architecture.
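To make the low-rank advantage concrete, here is a minimal NumPy sketch (my illustration, not the paper's optical implementation; the factor matrix U, the rank K, and the annealing loop are all assumptions) of annealing an Ising energy H(s) = -s^T J s with J = U U^T of rank K. The projection v = U^T s is maintained incrementally, so each spin-flip proposal costs O(K) rather than the O(N) a dense J would require:

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 1000, 4                  # N spins, rank-K interaction matrix
U = rng.normal(size=(N, K))     # illustrative factors: J = U @ U.T

def energy(s, U):
    """Ising energy H(s) = -s^T J s, evaluated through the rank-K factors
    in O(NK) time instead of the O(N^2) a dense J would need."""
    v = U.T @ s
    return -v @ v

# Plain single-spin-flip simulated annealing (illustrative; not the SPIM optics)
s = rng.choice([-1.0, 1.0], size=N)
v = U.T @ s                               # projection, updated incrementally
for step in range(20000):
    T = max(0.01, 1.0 - step / 20000)     # linear cooling schedule
    i = rng.integers(N)
    dv = -2.0 * s[i] * U[i]               # projection change if spin i flips
    dE = v @ v - (v + dv) @ (v + dv)      # energy difference of the flip
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i] *= -1
        v += dv

print("final energy:", energy(s, U))
```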
Related papers
- A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
These problems are often formalized as Bi-Level optimizations (BLO).
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization, where the inner loss function becomes a smooth probability distribution, and the outer loss becomes an expected loss over the inner distribution.
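In generic notation (a standard formulation for illustration, not taken verbatim from the paper), the reformulation can be read as replacing the inner arg-min with a Gibbs distribution:

```latex
\text{BLO:}\quad \min_{\lambda} F\bigl(\lambda, \theta^{*}(\lambda)\bigr)
\quad \text{s.t.} \quad \theta^{*}(\lambda) = \arg\min_{\theta} G(\lambda, \theta);
\qquad
\text{relaxation:}\quad \min_{\lambda}\; \mathbb{E}_{\theta \sim p_{\lambda}}\bigl[F(\lambda, \theta)\bigr],
\quad p_{\lambda}(\theta) \propto \exp\bigl(-G(\lambda, \theta)/\tau\bigr)
```

Here the temperature \tau controls the smoothing, and as \tau \to 0 the distribution concentrates on the inner minimizer. The Gibbs form is one common choice and an assumption on my part; the paper's exact construction may differ.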
arXiv Detail & Related papers (2024-10-14T12:10:06Z)
- Encoding arbitrary Ising Hamiltonians on Spatial Photonic Ising Machines [0.0]
We introduce and experimentally validate a SPIM instance that enables direct control over the full interaction matrix.
We demonstrate that the experimentally measured Ising energy agrees with the theoretically expected values, and then solve both unweighted and weighted graph problems.
Our approach greatly expands the applicability of SPIMs for real-world applications without sacrificing any of the inherent advantages of the system.
arXiv Detail & Related papers (2024-07-12T10:54:07Z)
- Efficient Computation Using Spatial-Photonic Ising Machines: Utilizing Low-Rank and Circulant Matrix Constraints [0.0]
Spatial-photonic Ising machines (SPIMs) address computationally intensive Ising problems by employing low-rank and circulant coupling matrices.
Our results indicate that the performance of SPIMs is critically affected by the rank and precision of the coupling matrices.
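As a concrete illustration of the circulant constraint (an assumed example, not the paper's experiment), a circulant coupling matrix J is defined by a single vector c, and the matrix-vector products needed for energy evaluation reduce to circular convolutions computable in O(N log N) with the FFT:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
c = rng.normal(size=N)               # defining column of the circulant J
s = rng.choice([-1.0, 1.0], size=N)

# Dense reference: J[i, j] = c[(i - j) % N]
J = np.array([[c[(i - j) % N] for j in range(N)] for i in range(N)])

# FFT route: circulant matrices are diagonalized by the DFT, so J @ s is a
# circular convolution of c with s
Js = np.fft.ifft(np.fft.fft(c) * np.fft.fft(s)).real

assert np.allclose(J @ s, Js)        # both give the same coupling field
print("Ising energy:", -0.5 * s @ Js)
```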
arXiv Detail & Related papers (2024-06-03T15:03:31Z)
- L0-regularized compressed sensing with Mean-field Coherent Ising Machines [0.8292466835099597]
We propose the mean-field CIM model, which is a physics-inspired solver without quantum noise.
Our results indicate that the proposed model performs similarly to physically accurate SDE models on both artificial and magnetic resonance imaging data.
arXiv Detail & Related papers (2024-05-01T07:43:26Z)
- Self-Improved Learning for Scalable Neural Combinatorial Optimization [15.842155380912002]
This work proposes a novel Self-Improved Learning (SIL) method for better scalability of neural combinatorial optimization.
We develop an efficient self-improved mechanism that enables direct model training on large-scale problem instances without any labeled data.
In addition, we design a linear-complexity attention mechanism that lets the model handle large-scale problem instances efficiently with low overhead.
arXiv Detail & Related papers (2024-03-28T16:46:53Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning Controllable Adaptive Simulation for Multi-resolution Physics (LAMP), the first fully deep-learning-based surrogate model of its kind.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that LAMP outperforms state-of-the-art deep learning surrogate models and can adaptively trade off computation to reduce long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
These conservative objective models (COMs) are simple to implement and outperform a number of existing methods on a wide range of model-based optimization (MBO) problems.
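A hedged sketch of the conservative training idea (my paraphrase of the lower-bounding objective; the function names, the gradient-ascent sampler, and all hyperparameters are assumptions, not the authors' exact code):

```python
import torch

def com_loss(f, x, y, alpha=0.1, ascent_steps=5, lr=0.05):
    """Sketch of a conservative objective model (COM) training loss: fit the
    offline data, but push predictions down at points found by gradient
    ascent on the model itself, so the learned model tends to lower-bound
    the true objective off-distribution."""
    # 1) standard regression on the offline dataset (x, y)
    mse = ((f(x).squeeze(-1) - y) ** 2).mean()

    # 2) "adversarial" inputs: ascend the learned objective from data points
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(ascent_steps):
        (grad,) = torch.autograd.grad(f(x_adv).sum(), x_adv)
        x_adv = (x_adv + lr * grad).detach().requires_grad_(True)

    # 3) conservatism term: lower the model at x_adv, keep it up on the data
    conservatism = f(x_adv).mean() - f(x).mean()
    return mse + alpha * conservatism
```

Usage would pair this loss with any differentiable surrogate f (e.g., a small MLP) and a standard optimizer; the alpha coefficient trades data fit against conservatism.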
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- Complexity continuum within Ising formulation of NP problems [0.0]
Minimisation of the Ising Hamiltonian is known to be an NP-hard problem for certain classes of interaction matrices.
We propose to identify computationally simple instances with an 'optimisation simplicity criterion'.
Such simplicity can be found for a wide range of models from spin glasses to k-regular maximum cut problems.
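For reference (textbook background, not specific to this paper), the Hamiltonian being minimised has the standard form

```latex
H(\mathbf{s}) = -\sum_{i<j} J_{ij}\, s_i s_j - \sum_{i} h_i s_i,
\qquad s_i \in \{-1, +1\},
```

and minimising it over the 2^N spin configurations is NP-hard for general couplings J_{ij}, which is what makes a criterion separating easy instances useful.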
arXiv Detail & Related papers (2020-08-02T11:36:38Z)
- A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
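One common instance of this pattern (a generic unrolled proximal-gradient layer, assumed here for illustration; not the paper's game-encoding construction) treats a sparse-coding problem as a network layer, so the forward pass literally runs a convex solver:

```python
import torch

class UnrolledISTA(torch.nn.Module):
    """A layer whose forward pass runs T proximal-gradient (ISTA) steps on the
    sparse-coding problem min_z 0.5*||x - D z||^2 + lam*||z||_1, with a
    learnable dictionary D trained end to end."""
    def __init__(self, dim_x, dim_z, T=10, lam=0.1):
        super().__init__()
        self.D = torch.nn.Parameter(0.1 * torch.randn(dim_x, dim_z))
        self.T, self.lam = T, lam

    def forward(self, x):                                   # x: (batch, dim_x)
        # step size 1/L from the Lipschitz constant of the data-term gradient
        L = (torch.linalg.matrix_norm(self.D, ord=2) ** 2).item()
        z = x.new_zeros(x.shape[0], self.D.shape[1])
        for _ in range(self.T):
            grad = (z @ self.D.T - x) @ self.D / L          # data-term gradient
            z = torch.nn.functional.softshrink(z - grad, self.lam / L)
        return z
```

The soft-shrinkage step is the proximal operator of the L1 prior, which is what makes the non-smooth problem tractable inside a differentiable layer.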
arXiv Detail & Related papers (2020-06-26T08:34:54Z)