Thermodynamics-Inspired Computing with Oscillatory Neural Networks for Inverse Matrix Computation
- URL: http://arxiv.org/abs/2507.22544v1
- Date: Wed, 30 Jul 2025 10:16:55 GMT
- Title: Thermodynamics-Inspired Computing with Oscillatory Neural Networks for Inverse Matrix Computation
- Authors: George Tsormpatzoglou, Filip Sabo, Aida Todri-Sanial
- Abstract summary: ONNs have been widely studied as Ising machines for tackling complex optimization problems. This work investigates their feasibility for solving linear algebra problems, specifically computing the inverse of a matrix.
- Score: 0.4887814315732678
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe a thermodynamics-inspired computing paradigm based on oscillatory neural networks (ONNs). While ONNs have been widely studied as Ising machines for tackling complex combinatorial optimization problems, this work investigates their feasibility for solving linear algebra problems, specifically matrix inversion. Grounded in thermodynamic principles, we analytically demonstrate that the linear approximation of the coupled Kuramoto oscillator model leads to the inverse matrix solution. Numerical simulations validate the theoretical framework, and we examine the parameter regimes in which the computation achieves the highest accuracy.
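As an illustration of the core idea (a minimal sketch under my own assumptions, not the authors' exact scheme): in the small-phase-difference regime, coupled Kuramoto dynamics linearize to a relaxation of the form dθ/dt = b − Aθ, whose fixed point is θ* = A⁻¹b, so relaxing against each unit vector recovers the columns of A⁻¹. The snippet below demonstrates this for a symmetric positive-definite matrix, where the flow is guaranteed to converge.

```python
# Hypothetical sketch: matrix inversion via linear relaxation dynamics.
# Assumes A is symmetric positive definite so d(theta)/dt = b - A @ theta
# converges to the fixed point theta* = A^{-1} b.
import numpy as np

def relax_column(A, b, dt=0.01, steps=20000):
    """Explicit-Euler integration of d(theta)/dt = b - A @ theta."""
    theta = np.zeros_like(b)
    for _ in range(steps):
        theta += dt * (b - A @ theta)
    return theta

def inverse_via_relaxation(A):
    """Recover A^{-1} column by column by relaxing against unit vectors."""
    return np.column_stack([relax_column(A, e) for e in np.eye(A.shape[0])])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = M @ M.T + 4 * np.eye(4)  # symmetric positive definite test case
    print(np.allclose(inverse_via_relaxation(A), np.linalg.inv(A), atol=1e-6))
```

The accuracy of the relaxed solution depends on the step size and the eigenvalue spread of A, which is consistent with the abstract's remark that certain parameter regimes yield higher accuracy.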
Related papers
- Self-Supervised Coarsening of Unstructured Grid with Automatic Differentiation [55.88862563823878]
In this work, we present an original algorithm to coarsen an unstructured grid based on the concepts of differentiable physics. We demonstrate the performance of the algorithm on two PDEs: a linear equation governing slightly compressible fluid flow in porous media, and the wave equation. Our results show that, in the considered scenarios, we reduced the number of grid points by up to a factor of 10 while preserving the dynamics of the modeled variable at the points of interest.
arXiv Detail & Related papers (2025-07-24T11:02:13Z) - Maximum-likelihood Estimators in Physics-Informed Neural Networks for High-dimensional Inverse Problems [0.0]
Physics-informed neural networks (PINNs) have proven a suitable mathematical scaffold for solving inverse ordinary (ODE) and partial differential equations (PDE).
In this work, we demonstrate that inverse PINNs can be framed in terms of maximum-likelihood estimators (MLE), allowing explicit error propagation to the physical model space through Taylor expansion.
arXiv Detail & Related papers (2023-04-12T17:15:07Z) - Linear combination of Hamiltonian simulation for nonunitary dynamics with optimal state preparation cost [8.181184006712785]
We propose a simple method for simulating a general class of non-unitary dynamics as a linear combination of Hamiltonian simulation problems.
We also demonstrate an application for open quantum dynamics simulation using the complex absorbing potential method with near-optimal dependence on all parameters.
arXiv Detail & Related papers (2023-03-02T07:37:54Z) - On the application of Sylvester's law of inertia to QUBO formulations for systems of linear equations [0.2538209532048866]
We develop the QUBO formulations of systems of linear equations by applying Sylvester's law of inertia.
We expect that the proposed algorithm can effectively implement higher dimensional systems of linear equations on a quantum computer.
arXiv Detail & Related papers (2021-11-19T07:55:10Z) - On the application of matrix congruence to QUBO formulations for systems of linear equations [0.505645669728935]
Recent studies on quantum computing algorithms focus on uncovering features of quantum computers that have the potential to enhance computational models.
In this paper, we simplify quadratic unconstrained binary optimization (QUBO) formulations of systems of linear equations by exploiting congruence of real symmetric matrices to diagonal matrices.
We further exhibit the computational merits of the proposed QUBO models, which can outperform classical algorithms such as QR and SVD decomposition.
arXiv Detail & Related papers (2021-11-01T07:52:01Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the complexity of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Fixed Depth Hamiltonian Simulation via Cartan Decomposition [59.20417091220753]
We present a constructive algorithm for generating quantum circuits with time-independent depth.
We highlight our algorithm for special classes of models, including Anderson localization in the one-dimensional transverse-field XY model.
In addition to providing exact circuits for a broad set of spin and fermionic models, our algorithm provides broad analytic and numerical insight into optimal Hamiltonian simulations.
arXiv Detail & Related papers (2021-04-01T19:06:00Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game in which both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Quantum Algorithm for Simulating Hamiltonian Dynamics with an Off-diagonal Series Expansion [1.0152838128195467]
We propose an efficient quantum algorithm for simulating the dynamics of general Hamiltonian systems.
Our method has an optimal dependence on the desired precision and, as we illustrate, generally requires considerably fewer resources than the current state-of-the-art.
arXiv Detail & Related papers (2020-06-03T21:25:59Z)