Self-averaging of digital memcomputing machines
- URL: http://arxiv.org/abs/2301.08787v2
- Date: Thu, 7 Sep 2023 19:00:26 GMT
- Title: Self-averaging of digital memcomputing machines
- Authors: Daniel Primosch, Yuan-Hang Zhang and Massimiliano Di Ventra
- Abstract summary: Digital memcomputing machines (DMMs) are a new class of computing machines that employ non-quantum dynamical systems with memory to solve optimization problems.
We show that the time to solution (TTS) of DMMs follows an inverse Gaussian distribution, with the TTS self-averaging with increasing problem size.
We provide both an analytical understanding of this phenomenon and numerical evidence by solving instances of the 3-SAT (satisfiability) problem.
- Score: 4.429709236737154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Digital memcomputing machines (DMMs) are a new class of computing machines
that employ non-quantum dynamical systems with memory to solve combinatorial
optimization problems. Here, we show that the time to solution (TTS) of DMMs
follows an inverse Gaussian distribution, with the TTS self-averaging with
increasing problem size, irrespective of the problem they solve. We provide
both an analytical understanding of this phenomenon and numerical evidence by
solving instances of the 3-SAT (satisfiability) problem. The self-averaging
property of DMMs with problem size implies that they are increasingly
insensitive to the detailed features of the instances they solve. This is in
sharp contrast to traditional algorithms applied to the same problems,
illustrating another advantage of this physics-based approach to computation.
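The self-averaging claim can be checked numerically. Below is a minimal sketch (not the authors' code) that draws synthetic TTS samples from an inverse Gaussian distribution and shows the relative fluctuations shrinking with problem size. The scaling of the mean (proportional to N) and shape (proportional to N^2) parameters is purely an illustrative assumption, chosen so that the relative standard deviation sqrt(mean/shape) decays like 1/sqrt(N).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

for n in (10, 100, 1000, 10000):
    mean_tts, shape = float(n), float(n) ** 2   # hypothetical scaling with problem size N
    # scipy's parameterization: invgauss(mu=mean/shape, scale=shape) is IG(mean, shape)
    tts = stats.invgauss.rvs(mean_tts / shape, scale=shape, size=20000, random_state=rng)
    rel_std = tts.std() / tts.mean()            # relative fluctuation of the TTS
    print(f"N={n:>6}  relative std of TTS = {rel_std:.3f}")  # shrinks roughly as 1/sqrt(N)
```

Any scaling in which the shape parameter grows faster than the mean gives the same qualitative trend; the abstract's point is that DMMs exhibit this shrinking of relative fluctuations irrespective of the instance details.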
Related papers
- On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss (a minimal PINN sketch follows this entry).
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
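For readers unfamiliar with PINNs, the sketch below trains a minimal physics-informed network on a single toy ODE, dy/dt = -y with y(0) = 1. It only illustrates the residual-plus-initial-condition loss that PINNs minimize; it is not the benchmark suite of the paper above, and PyTorch is assumed.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t = torch.linspace(0.0, 2.0, 64).reshape(-1, 1).requires_grad_(True)
for _ in range(2000):
    y = net(t)
    dy_dt, = torch.autograd.grad(y, t, torch.ones_like(y), create_graph=True)
    residual = dy_dt + y                      # physics residual of dy/dt = -y
    ic = net(torch.zeros(1, 1)) - 1.0         # initial condition y(0) = 1
    loss = (residual ** 2).mean() + (ic ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())      # should approach exp(-1), about 0.368
```

The entry above reports that this simple recipe breaks down as the coupled ODE systems grow more complex.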
- Automatic Data Augmentation via Invariance-Constrained Learning [94.27081585149836]
Underlying data structures, such as symmetries, are often exploited to improve the solution of learning tasks.
Data augmentation induces these symmetries during training by applying multiple transformations to the input data.
This work tackles these issues by automatically adapting the data augmentation while solving the learning task.
arXiv Detail & Related papers (2022-09-29T18:11:01Z)
- AI-enhanced iterative solvers for accelerating the solution of large scale parametrized linear systems of equations [0.0]
This paper exploits up-to-date ML tools to deliver customized iterative solvers for linear systems of equations.
The results indicate their superiority over conventional iterative solution schemes.
arXiv Detail & Related papers (2022-07-06T09:47:14Z)
- PI-VAE: Physics-Informed Variational Auto-Encoder for stochastic differential equations [2.741266294612776]
We propose a new class of physics-informed neural networks, called the physics-informed Variational Autoencoder (PI-VAE).
PI-VAE consists of a variational autoencoder (VAE) that generates samples of system variables and parameters.
The satisfactory accuracy and efficiency of the proposed method are numerically demonstrated in comparison with the physics-informed generative adversarial network (PI-WGAN).
arXiv Detail & Related papers (2022-03-21T21:51:19Z)
- Adiabatic Quantum Computing for Multi Object Tracking [170.8716555363907]
Multi-Object Tracking (MOT) is most often approached in the tracking-by-detection paradigm, where object detections are associated through time.
As these optimization problems are often NP-hard, they can only be solved exactly for small instances on current hardware.
We show that our approach is competitive compared with state-of-the-art optimization-based approaches, even when using off-the-shelf integer programming solvers.
arXiv Detail & Related papers (2022-02-17T18:59:20Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
- Directed percolation and numerical stability of simulations of digital memcomputing machines [8.761355402590105]
Digital memcomputing machines (DMMs) are a novel class of machines designed to solve optimization problems.
These machines can be physically realized with continuous-time, non-quantum dynamical systems with memory.
Solutions of many hard problems have been reported by numerically integrating the ODEs of DMMs (a toy integration sketch follows this entry).
arXiv Detail & Related papers (2021-02-06T09:44:28Z)
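As a point of reference for the entry above, the sketch below integrates a toy two-variable continuous-time system with a slow memory variable using forward Euler. The dynamics are invented for illustration and are not the DMM equations studied in the paper; they only show the kind of explicit integration whose stability that work analyzes.

```python
import numpy as np

def toy_rhs(v, x):
    """Hypothetical dynamics: a fast 'voltage' v coupled to a slow memory variable x."""
    dv = -v * (1.0 - x) + 0.5 * x      # v relaxes, modulated by the memory
    dx = 0.1 * (v ** 2 - x)            # memory slowly tracks v**2
    return dv, dx

def euler_step(v, x, dt):
    dv, dx = toy_rhs(v, x)
    # keep the memory variable bounded, as memory variables typically are
    return v + dt * dv, float(np.clip(x + dt * dx, 0.0, 1.0))

v, x = 1.0, 0.0
for _ in range(2000):
    v, x = euler_step(v, x, dt=0.05)   # too large a dt destabilizes an explicit scheme
print(v, x)                            # the toy system settles to a fixed point
```

With explicit schemes like this, an overly large time step destabilizes the trajectory, which is the kind of numerical-stability question the paper above studies.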
- Non-intrusive surrogate modeling for parametrized time-dependent PDEs using convolutional autoencoders [0.0]
We present a non-intrusive surrogate modeling scheme based on machine learning for predictive modeling of complex systems described by parametrized time-dependent PDEs.
We use a convolutional autoencoder in conjunction with a feed-forward neural network to establish a low-cost and accurate mapping from the problem's parametric space to its solution space.
arXiv Detail & Related papers (2021-01-14T11:34:58Z)
- Machine Number Sense: A Dataset of Visual Arithmetic Problems for Abstract and Relational Reasoning [95.18337034090648]
We propose a dataset, Machine Number Sense (MNS), consisting of visual arithmetic problems automatically generated using a grammar model, the And-Or Graph (AOG).
These visual arithmetic problems are in the form of geometric figures.
We benchmark the MNS dataset using four predominant neural network models as baselines in this visual reasoning task.
arXiv Detail & Related papers (2020-04-25T17:14:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.