Arithmetic circuit tensor networks, multivariable function
representation, and high-dimensional integration
- URL: http://arxiv.org/abs/2209.07410v1
- Date: Tue, 16 Aug 2022 23:02:14 GMT
- Title: Arithmetic circuit tensor networks, multivariable function
representation, and high-dimensional integration
- Authors: Ruojing Peng, Johnnie Gray, Garnet Kin-Lic Chan
- Abstract summary: We introduce a direct mapping from the arithmetic circuit of a function to circuit tensor networks.
We demonstrate the power of the circuit construction in examples of multivariable integration on the unit hypercube in up to 50 dimensions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many computational problems can be formulated in terms of high-dimensional
functions. Simple representations of such functions and resulting computations
with them typically suffer from the "curse of dimensionality", an exponential
cost dependence on dimension. Tensor networks provide a way to represent
certain classes of high-dimensional functions with polynomial memory. This
results in computations where the exponential cost is ameliorated or in some
cases, removed, if the tensor network representation can be obtained. Here, we
introduce a direct mapping from the arithmetic circuit of a function to
arithmetic circuit tensor networks, avoiding the need to perform any
optimization or functional fit. We demonstrate the power of the circuit
construction in examples of multivariable integration on the unit hypercube in
up to 50 dimensions, where the complexity of integration can be understood from
the circuit structure. We find very favorable cost scaling compared to
quasi-Monte-Carlo integration for these cases, and further give an example where
efficient quasi-Monte-Carlo integration is theoretically impossible without
knowledge of the underlying tensor network circuit structure.
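To make the scaling advantage concrete, here is a minimal sketch (in NumPy; it is not the paper's arithmetic-circuit-to-tensor-network mapping, and the rank-1 example integrand is purely illustrative) of how a tensor-train representation turns integration over the unit hypercube into a chain of small contractions with one-dimensional quadrature weights.

```python
# Minimal sketch (assumed example, not the paper's circuit construction):
# a function on [0,1]^d stored as a tensor train, one small "core" per
# variable, can be integrated by contracting each core with 1D quadrature
# weights -- cost O(d * n * r^2) instead of the O(n^d) of a dense grid.
import numpy as np

d, n = 50, 32                               # dimensions, grid points per axis
xs, w = np.polynomial.legendre.leggauss(n)  # Gauss-Legendre nodes/weights on [-1, 1]
xs, w = 0.5 * (xs + 1.0), 0.5 * w           # rescale to the unit interval [0, 1]

# Hand-built rank-1 cores for f(x) = prod_k (1 + x_k); the contraction
# below works unchanged for higher tensor-train ranks.
cores = [np.ones((1, n, 1)) * (1.0 + xs)[None, :, None] for _ in range(d)]

# Integrate: sum each core over its grid index with the quadrature weights,
# then multiply the resulting small (rank x rank) matrices along the chain.
acc = np.ones((1, 1))
for g in cores:
    acc = acc @ np.einsum('anb,n->ab', g, w)

print(acc[0, 0], 1.5 ** d)                  # matches the exact value (3/2)^d
```

For an integrand that genuinely couples its variables the cores carry rank r > 1, but the contraction cost still grows only linearly with the dimension d, which is the kind of behavior the paper benchmarks against quasi-Monte-Carlo integration.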
Related papers
- Compressing multivariate functions with tree tensor networks [0.0]
One-dimensional tensor networks are increasingly being used as a numerical ansatz for continuum functions.
We show how more structured tree tensor networks offer a significantly more efficient ansatz than the commonly used tensor train.
arXiv Detail & Related papers (2024-10-04T16:20:52Z)
- Chebyshev approximation and composition of functions in matrix product states for quantum-inspired numerical analysis [0.0]
It proposes an algorithm that employs iterative Chebyshev expansions and Clenshaw evaluations to represent analytic and highly differentiable functions as MPS Chebyshev interpolants.
It demonstrates rapid convergence for highly differentiable functions, aligning with theoretical predictions, and generalizes efficiently to multidimensional scenarios.
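As a plain-NumPy illustration of the two ingredients named above (a minimal one-variable sketch; the paper's contribution is carrying this out inside matrix product states, which is not shown here), one can build a Chebyshev interpolant of a smooth function and evaluate the expansion with the Clenshaw recurrence:

```python
# Minimal sketch (not the paper's MPS algorithm): Chebyshev interpolation of
# a smooth 1D function, plus Clenshaw evaluation of the resulting expansion.
import numpy as np

def cheb_coeffs(f, N):
    """Coefficients of the degree-N Chebyshev interpolant of f on [-1, 1]."""
    k = np.arange(N + 1)
    x = np.cos(np.pi * (k + 0.5) / (N + 1))          # Chebyshev nodes
    fx = f(x)
    c = 2.0 / (N + 1) * np.array(
        [np.sum(fx * np.cos(np.pi * j * (k + 0.5) / (N + 1))) for j in range(N + 1)])
    c[0] *= 0.5                                       # so that p(x) = sum_j c_j T_j(x)
    return c

def clenshaw(c, x):
    """Evaluate sum_j c_j T_j(x) with the Clenshaw recurrence."""
    b1 = b2 = 0.0
    for cj in c[:0:-1]:                               # c_N, ..., c_1
        b1, b2 = 2.0 * x * b1 - b2 + cj, b1
    return x * b1 - b2 + c[0]

c = cheb_coeffs(np.exp, 20)
print(clenshaw(c, 0.3), np.exp(0.3))                  # agree to ~machine precision
```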
arXiv Detail & Related papers (2024-07-12T18:00:06Z)
- Unified framework for efficiently computable quantum circuits [0.0]
Clifford circuits and matchgate circuits are two classes of quantum circuits known to be efficiently simulatable on a classical computer.
We introduce a unified framework that shows, in a transparent way, the special structure that allows these circuits to be simulated efficiently.
arXiv Detail & Related papers (2024-01-16T08:04:28Z)
- Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs [93.82811501035569]
We introduce a new data-efficient and highly parallelizable operator learning approach with reduced memory requirements and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
arXiv Detail & Related papers (2023-09-29T20:18:52Z)
- Quantum correlation functions through tensor network path integral [0.0]
Tensor networks are utilized to calculate equilibrium correlation functions for open quantum systems.
The influence of the solvent on the quantum system is incorporated through an influence functional.
The design and implementation of this method is discussed along with illustrations from rate theory, symmetrized spin correlation functions, dynamical susceptibility calculations and quantum thermodynamics.
arXiv Detail & Related papers (2023-08-21T07:46:51Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
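For orientation, the sketch below writes a generic explicit Runge-Kutta step as a recurrence over stage values; it is an assumed, illustrative example of the kind of fixed iteration the R2N2 superstructure generalizes, not the architecture itself.

```python
# Minimal sketch (illustrative; not the R2N2 superstructure): an explicit
# Runge-Kutta step for an autonomous ODE y' = f(y) is itself a short
# recurrence over stage values.
import numpy as np

def rk_step(f, y, h, A, b):
    """One explicit Runge-Kutta step defined by Butcher coefficients (A, b)."""
    k = []
    for i in range(len(b)):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))   # stage recurrence
        k.append(f(yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Classic RK4 coefficients; integrate y' = -y from y(0) = 1 to t = 1.
A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
y, h = np.array([1.0]), 0.1
for _ in range(10):
    y = rk_step(lambda z: -z, y, h, A, b)
print(y[0], np.exp(-1.0))   # both ~0.36788
```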
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers [50.85524803885483]
This work proposes a formal definition of statistically meaningful (SM) approximation which requires the approximating network to exhibit good statistical learnability.
We study SM approximation for two function classes: circuits and Turing machines.
arXiv Detail & Related papers (2021-07-28T04:28:55Z)
- A Pairwise Connected Tensor Network Representation of Path Integrals [0.0]
It has been recently shown how the tensorial nature of real-time path integrals involving the Feynman-Vernon influence functional can be utilized.
Here, a generalized tensor network is derived and implemented specifically incorporating the pairwise interaction structure of the influence functional.
This pairwise connected tensor network path integral (PCTNPI) is illustrated through applications to typical spin-boson problems and explorations of the differences caused by the exact form of the spectral density.
arXiv Detail & Related papers (2021-06-28T18:30:17Z)
- On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the function.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z)
- A Functional Perspective on Learning Symmetric Functions with Neural Networks [48.80300074254758]
We study the learning and representation of neural networks defined on measures.
We establish approximation and generalization bounds under different choices of regularization.
The resulting models can be learned efficiently and enjoy generalization guarantees that extend across input sizes.
arXiv Detail & Related papers (2020-08-16T16:34:33Z)
- Continuous-in-Depth Neural Networks [107.47887213490134]
We first show that ResNets fail to be meaningful dynamical integrators in this richer sense.
We then demonstrate that neural network models can learn to represent continuous dynamical systems.
We introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures.
arXiv Detail & Related papers (2020-08-05T22:54:09Z)