Approximation with Tensor Networks. Part I: Approximation Spaces
- URL: http://arxiv.org/abs/2007.00118v3
- Date: Fri, 5 Feb 2021 14:38:21 GMT
- Title: Approximation with Tensor Networks. Part I: Approximation Spaces
- Authors: Mazen Ali and Anthony Nouy
- Abstract summary: We study the approximation of functions by tensor networks (TNs).
We show that Lebesgue $L^p$-spaces in one dimension can be identified with tensor product spaces of arbitrary order through tensorization.
We also show that functions in these approximation classes do not possess any Besov smoothness unless the depth of the tensor networks is restricted.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the approximation of functions by tensor networks (TNs). We show
that Lebesgue $L^p$-spaces in one dimension can be identified with tensor
product spaces of arbitrary order through tensorization. We use this tensor
product structure to define subsets of $L^p$ of rank-structured functions of
finite representation complexity. These subsets are then used to define
different approximation classes of tensor networks, associated with different
measures of complexity. These approximation classes are shown to be
quasi-normed linear spaces. We study some elementary properties and
relationships of said spaces. In part II of this work, we will show that
classical smoothness (Besov) spaces are continuously embedded into these
approximation classes. We will also show that functions in these approximation
classes do not possess any Besov smoothness, unless one restricts the depth of
the tensor networks. The results of this work are both an analysis of the
approximation spaces of TNs and a study of the expressivity of a particular
type of neural network (NN) -- namely feed-forward sum-product networks with
sparse architecture. The input variables of this network result from the
tensorization step, interpreted as a particular featuring step which can also
be implemented with a neural network with a specific architecture. We point out
interesting parallels to recent results on the expressivity of rectified linear
unit (ReLU) networks -- currently one of the most popular types of NNs.
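To make the tensorization step concrete, here is a minimal numerical sketch (an illustration under simplifying assumptions, not the paper's function-space construction): a univariate function is sampled on a uniform base-$b$ grid, the digits of the base-$b$ expansion $x = \sum_{k=1}^{d} i_k b^{-k}$ serve as tensor indices, and a rank-structured (tensor-train) representation is extracted by sequential truncated SVDs. The helper names `tensorize`, `tt_svd`, and `evaluate`, the grid-sampling setting, and the truncation tolerance are all choices made for this example; note that evaluating the cores at a multi-index is a feed-forward sum-product computation of the kind mentioned above.

```python
import numpy as np

def tensorize(f, b=2, d=10):
    # Sample f on the uniform grid {j / b**d : j = 0, ..., b**d - 1} and
    # reshape the samples into a d-way tensor of shape (b, ..., b).
    # The multi-index (i_1, ..., i_d) encodes x = sum_k i_k * b**(-k).
    x = np.arange(b**d) / b**d
    return f(x).reshape((b,) * d)

def tt_svd(a, tol=1e-10):
    # Sequential truncated SVDs (TT-SVD): returns tensor-train cores whose
    # ranks quantify the representation complexity of the tensorized function.
    d, shape = a.ndim, a.shape
    cores, r = [], 1
    mat = a.reshape(r * shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        rk = max(1, int(np.sum(s > tol * s[0])))
        cores.append(u[:, :rk].reshape(r, shape[k], rk))
        mat = (s[:rk, None] * vt[:rk]).reshape(rk * shape[k + 1], -1)
        r = rk
    cores.append(mat.reshape(r, shape[-1], 1))
    return cores

def evaluate(cores, digits):
    # Contract the cores along one multi-index: a chain of sums and products,
    # i.e. a feed-forward sum-product evaluation.
    v = np.ones((1,))
    for core, i in zip(cores, digits):
        v = v @ core[:, i, :]
    return v.item()

if __name__ == "__main__":
    b, d = 2, 12
    cores = tt_svd(tensorize(np.exp, b, d))
    print([c.shape[2] for c in cores])       # all ranks equal to 1
    digits = [1, 0, 1] + [0] * (d - 3)       # x = 0.101 in base 2 = 0.625
    print(evaluate(cores, digits), np.exp(0.625))  # should agree closely
```

For instance, $f(x)=e^{x}$ tensorizes with all ranks equal to one, since $e^{\sum_k i_k b^{-k}} = \prod_k e^{i_k b^{-k}}$, which the rank printout above reflects; smoother or more oscillatory functions lead to larger ranks.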
Related papers
- Tangent Bundle Convolutional Learning: from Manifolds to Cellular Sheaves and Back [84.61160272624262]
We define tangent bundle filters and tangent bundle neural networks (TNNs) based on a convolution operation over the tangent bundle of a manifold.
Tangent bundle filters admit a spectral representation that generalizes the ones of scalar manifold filters, graph filters and standard convolutional filters in continuous time.
We numerically evaluate the effectiveness of the proposed architecture on various learning tasks.
arXiv Detail & Related papers (2023-03-20T17:57:15Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- A singular Riemannian geometry approach to Deep Neural Networks II. Reconstruction of 1-D equivalence classes [78.120734120667]
We build the preimage of a point in the output manifold in the input space.
We focus for simplicity on the case of neural networks maps from n-dimensional real spaces to (n - 1)-dimensional real spaces.
arXiv Detail & Related papers (2021-12-17T11:47:45Z)
- Sobolev-type embeddings for neural network approximation spaces [5.863264019032882]
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated.
We prove embedding theorems between these spaces for different values of $p$.
We find that, analogous to the case of classical function spaces, it is possible to trade "smoothness" (i.e., approximation rate) for increased integrability.
arXiv Detail & Related papers (2021-10-28T17:11:38Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Lower and Upper Bounds on the VC-Dimension of Tensor Network Models [8.997952791113232]
Tensor network methods have been a key ingredient of advances in condensed matter physics.
They can be used to efficiently learn linear models in exponentially large feature spaces.
In this work, we derive upper and lower bounds on the VC dimension and pseudo-dimension of a large class of tensor network models.
arXiv Detail & Related papers (2021-06-22T14:39:25Z)
- Approximation Theory of Tree Tensor Networks: Tensorized Multivariate Functions [0.0]
We show that TNs can optimally or near-optimally replicate $h$-uniform and $h$-adaptive approximation for any smoothness order of the target function.
TNs have the capacity to optimally or near-optimally approximate many function classes -- without being adapted to the particular class in question.
arXiv Detail & Related papers (2021-01-28T11:09:40Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata(WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- Approximation with Tensor Networks. Part II: Approximation Rates for Smoothness Classes [0.0]
We study the approximation by tensor networks (TNs) of functions from smoothness classes.
The resulting tool can be interpreted as a feed-forward neural network.
We show that arbitrary Besov functions can be approximated with optimal or near to optimal rate.
arXiv Detail & Related papers (2020-06-30T21:57:42Z)
- Optimization at the boundary of the tensor network variety [2.1839191255085995]
Tensor network states form a variational ansatz class widely used in the study of quantum many-body systems.
Recent work has shown that states on the boundary of this variety can yield more efficient representations for states of physical interest.
We show how to optimize over this class in order to find ground states of local Hamiltonians.
arXiv Detail & Related papers (2020-06-30T16:58:55Z)
- Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
The goal of this work is to generalize neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators (a minimal sketch of one such layer follows this entry).
Experiments confirm that the proposed graph kernel network does have the desired properties and show competitive performance compared to state-of-the-art solvers.
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
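As a rough illustration of the preceding entry, the sketch below implements a single kernel-integral layer of the form $v_{t+1}(x_i) = \sigma\big(W v_t(x_i) + \tfrac{1}{N}\sum_{j=1}^{N} \kappa(x_i, x_j)\, v_t(x_j)\big)$, i.e. a nonlinear activation (ReLU here) composed with a Monte Carlo approximation of an integral operator. This is a generic sketch consistent with the summary above, not the authors' exact architecture: the name `kernel_integral_layer`, the Gaussian kernel, and the all-pairs averaging are assumptions; in a graph kernel network the kernel is itself typically parameterized by a small neural network and the sum is restricted to graph neighborhoods.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def kernel_integral_layer(v, points, W, kernel):
    # One layer of a kernel-integral network:
    #   v_new(x_i) = relu( W v(x_i) + (1/N) * sum_j kernel(x_i, x_j) @ v(x_j) )
    # where the average over sampled points approximates the integral operator
    # (K v)(x) = \int kappa(x, y) v(y) dy.
    n = len(points)
    agg = np.zeros_like(v)
    for i in range(n):
        # kernel(points[i], points[j]) returns a (channels x channels) matrix
        agg[i] = sum(kernel(points[i], points[j]) @ v[j] for j in range(n)) / n
    return relu(v @ W.T + agg)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, channels = 64, 8
    points = rng.uniform(size=(n, 2))          # sample locations in the domain
    v = rng.normal(size=(n, channels))         # input feature field v_t
    W = rng.normal(size=(channels, channels))  # pointwise linear map

    # A simple stationary Gaussian kernel producing a channel-mixing matrix;
    # in practice the kernel would itself be a learned (neural) parameterization.
    A = rng.normal(size=(channels, channels)) / channels
    def kernel(x, y):
        return np.exp(-np.sum((x - y) ** 2)) * A

    v_next = kernel_integral_layer(v, points, W, kernel)
    print(v_next.shape)  # (64, 8): one hidden layer of the operator network
```

Stacking several such layers, with the pointwise map and kernel parameters learned from data, yields a network that acts on functions sampled at arbitrary point sets rather than on fixed-size vectors.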
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.