Approximation with Tensor Networks. Part II: Approximation Rates for
Smoothness Classes
- URL: http://arxiv.org/abs/2007.00128v3
- Date: Fri, 5 Feb 2021 14:41:03 GMT
- Title: Approximation with Tensor Networks. Part II: Approximation Rates for
Smoothness Classes
- Authors: Mazen Ali and Anthony Nouy
- Abstract summary: We study the approximation by tensor networks (TNs) of functions from smoothness classes.
The resulting tool can be interpreted as a feed-forward neural network.
We show that arbitrary Besov functions can be approximated with optimal or near-optimal rate.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the approximation by tensor networks (TNs) of functions from
classical smoothness classes. The considered approximation tool combines a
tensorization of functions in $L^p([0,1))$, which allows one to identify a
univariate function with a multivariate function (or tensor), and the use of
tree tensor networks (the tensor train format) for exploiting low-rank
structures of multivariate functions. The resulting tool can be interpreted as
a feed-forward neural network, with first layers implementing the
tensorization, interpreted as a particular featuring step, followed by a
sum-product network with sparse architecture. In part I of this work, we
presented several approximation classes associated with different measures of
complexity of tensor networks and studied their properties. In this work (part
II), we show how classical approximation tools, such as polynomials or splines
(with fixed or free knots), can be encoded as a tensor network with controlled
complexity. We use this to derive direct (Jackson) inequalities for the
approximation spaces of tensor networks. This is then utilized to show that
Besov spaces are continuously embedded into these approximation spaces. In
other words, we show that arbitrary Besov functions can be approximated with
optimal or near-optimal rate. We also show that an arbitrary function in the
approximation class possesses no Besov smoothness, unless one limits the depth
of the tensor network.
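To make the tensorization step concrete, the following is a minimal NumPy sketch (an illustration assuming a dyadic sampling grid, not the paper's exact construction): a univariate function sampled on $2^d$ points of $[0,1)$ is identified with a $d$-way tensor indexed by the binary digits of the grid points, and that tensor is then compressed in the tensor-train format by successive truncated SVDs (the standard TT-SVD procedure).

```python
# Minimal sketch: tensorization of a univariate function plus TT-SVD compression.
# Illustration only; grid, tolerance and target function are chosen arbitrarily.
import numpy as np

d = 10                        # number of binary digits = tensor order
N = 2 ** d                    # dyadic grid size
x = np.arange(N) / N          # grid points x = 0.i_1 i_2 ... i_d (binary)
f = np.sin(2 * np.pi * x)     # univariate function to tensorize

# Tensorization: F[i_1, ..., i_d] = f(i_1/2 + i_2/4 + ... + i_d/2^d)
F = f.reshape([2] * d)

def tt_svd(T, tol=1e-10):
    """Compress a 2 x 2 x ... x 2 tensor into TT cores by successive truncated SVDs."""
    order = T.ndim
    cores, r = [], 1
    for k in range(order - 1):
        mat = T.reshape(r * 2, -1)                        # unfold: (rank * mode) x rest
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))        # drop small singular values
        cores.append(U[:, :rank].reshape(r, 2, rank))
        T = (np.diag(s[:rank]) @ Vt[:rank]).reshape(rank, *([2] * (order - k - 1)))
        r = rank
    cores.append(T.reshape(r, 2, 1))
    return cores

cores = tt_svd(F)
print("TT ranks:", [c.shape[2] for c in cores[:-1]])      # small ranks = low-rank structure

# Contract the cores back together and check the reconstruction error.
G = cores[0]
for c in cores[1:]:
    G = np.tensordot(G, c, axes=(G.ndim - 1, 0))
print("max error:", np.abs(G.reshape(N) - f).max())
```

For a smooth target like the sine above, the ranks stay small; the complexity measures of tensor networks studied in parts I and II are built from such ranks.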
Related papers
- Compressing multivariate functions with tree tensor networks [0.0]
One-dimensional tensor networks are increasingly being used as a numerical ansatz for continuum functions.
We show how more structured tree tensor networks offer a significantly more efficient ansatz than the commonly used tensor train.
arXiv Detail & Related papers (2024-10-04T16:20:52Z)
- TensorKrowch: Smooth integration of tensor networks in machine learning [46.0920431279359]
We introduce TensorKrowch, an open-source Python library built on top of PyTorch.
TensorKrowch allows users to construct any tensor network, train it, and integrate it as a layer in more intricate deep learning models.
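As a rough sketch of what integrating a tensor network as a layer looks like, below is a hand-rolled PyTorch module implementing a trainable matrix-product-state (tensor-train) layer. The class, parameter names and feature map are invented for this illustration and are not the TensorKrowch API.

```python
# Hand-rolled illustration of a tensor network as a trainable layer
# (NOT the TensorKrowch API; all names here are made up for this sketch).
import torch
import torch.nn as nn

class MPSLayer(nn.Module):
    """Contracts n_sites local feature vectors through a trainable tensor train."""
    def __init__(self, n_sites, phys_dim, bond_dim, out_dim):
        super().__init__()
        self.cores = nn.ParameterList([
            nn.Parameter(0.1 * torch.randn(
                1 if i == 0 else bond_dim,                      # left bond dimension
                phys_dim,                                       # physical (input) leg
                out_dim if i == n_sites - 1 else bond_dim))     # right bond / output
            for i in range(n_sites)])

    def forward(self, x):                      # x: (batch, n_sites, phys_dim)
        msg = torch.ones(x.shape[0], 1, device=x.device)        # running bond message
        for i, core in enumerate(self.cores):
            mat = torch.einsum('bp,lpr->blr', x[:, i], core)    # absorb i-th feature
            msg = torch.einsum('bl,blr->br', msg, mat)          # contract bond index
        return msg                             # (batch, out_dim)

# Usage: map 16 scalar inputs to a 2-dimensional local feature and classify.
layer = MPSLayer(n_sites=16, phys_dim=2, bond_dim=8, out_dim=10)
x = torch.rand(32, 16)
feat = torch.stack([x, 1 - x], dim=-1)         # (32, 16, 2) local feature map
logits = layer(feat)                           # (32, 10), trainable end to end
```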
arXiv Detail & Related papers (2023-06-14T15:55:19Z)
- Low-Rank Tensor Function Representation for Multi-Dimensional Data Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond the meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method compared with state-of-the-art methods.
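Below is a small PyTorch sketch of the underlying idea, using a CP-style (sum of rank-one products) tensor function whose univariate factors are small neural networks; this is an illustrative stand-in with invented names, not the paper's LRTFR model.

```python
# Illustrative low-rank tensor function: f(x, y, z) = sum_r u_r(x) v_r(y) w_r(z),
# queryable at arbitrary real coordinates (a CP-style stand-in, not the paper's model).
import torch
import torch.nn as nn

class FactorNet(nn.Module):
    """Maps a batch of scalar coordinates to `rank` factor values."""
    def __init__(self, rank, width=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, rank))
    def forward(self, t):                        # t: (batch, 1)
        return self.net(t)                       # (batch, rank)

class LowRankTensorFunction(nn.Module):
    def __init__(self, rank=8):
        super().__init__()
        self.u, self.v, self.w = FactorNet(rank), FactorNet(rank), FactorNet(rank)
    def forward(self, coords):                   # coords: (batch, 3) in [0, 1]^3
        x, y, z = coords[:, 0:1], coords[:, 1:2], coords[:, 2:3]
        return (self.u(x) * self.v(y) * self.w(z)).sum(dim=-1)

# Fit observed entries; afterwards the model can be evaluated off-grid ("beyond the meshgrid").
model = LowRankTensorFunction(rank=8)
coords = torch.rand(1024, 3)                     # observed sample locations
values = torch.sin(4 * coords.sum(dim=-1))       # toy observations at those locations
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(coords) - values) ** 2).mean()
    loss.backward()
    opt.step()
```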
arXiv Detail & Related papers (2022-12-01T04:00:38Z)
- Tensor networks in machine learning [0.0]
A tensor network is a decomposition used to express and approximate large arrays of data.
A merger of tensor networks with machine learning is natural.
Herein the network parameters are adjusted to learn or classify a data-set.
arXiv Detail & Related papers (2022-07-06T18:00:00Z)
- Benefits of Overparameterized Convolutional Residual Networks: Function Approximation under Smoothness Constraint [48.25573695787407]
We prove that large ConvResNets can not only approximate a target function in terms of function value, but also exhibit sufficient first-order smoothness.
Our theory partially justifies the benefits of using deep and wide networks in practice.
arXiv Detail & Related papers (2022-06-09T15:35:22Z)
- Sobolev-type embeddings for neural network approximation spaces [5.863264019032882]
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated.
We prove embedding theorems between these spaces for different values of $p$.
We find that, analogous to the case of classical function spaces, it is possible to trade "smoothness" (i.e., approximation rate) for increased integrability.
arXiv Detail & Related papers (2021-10-28T17:11:38Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
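A toy numerical check of this statement (an illustration with arbitrarily chosen sizes, not the paper's construction or proof): two concentric classes are not linearly separable in the plane, but become linearly separable after a random ReLU layer with standard Gaussian weights and uniformly distributed biases.

```python
# Toy check: random ReLU features (Gaussian weights, uniform biases) make two
# concentric classes linearly separable; sizes and ranges are chosen arbitrarily.
import numpy as np

rng = np.random.default_rng(0)

# Two classes: an inner disk and an outer ring, not linearly separable in 2-D.
n = 400
radii = np.r_[rng.uniform(0.0, 1.0, n // 2), rng.uniform(2.0, 3.0, n // 2)]
angles = rng.uniform(0.0, 2 * np.pi, n)
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
y = np.r_[-np.ones(n // 2), np.ones(n // 2)]

# One random ReLU layer: standard Gaussian weights, uniformly distributed biases.
width = 500
W = rng.normal(size=(2, width))
b = rng.uniform(-3.0, 3.0, width)
H = np.maximum(X @ W + b, 0.0)

def linear_fit_accuracy(F, labels):
    """Training accuracy of a least-squares linear classifier on features F."""
    Fb = np.c_[F, np.ones(len(F))]                        # append a bias column
    w, *_ = np.linalg.lstsq(Fb, labels, rcond=None)
    return float(np.mean(np.sign(Fb @ w) == labels))

print("linear classifier on raw 2-D inputs: ", linear_fit_accuracy(X, y))   # roughly chance
print("linear classifier on random features:", linear_fit_accuracy(H, y))   # perfect separation
```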
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Approximation Theory of Tree Tensor Networks: Tensorized Multivariate Functions [0.0]
We show that TNs can (near to) optimally replicate $h$-uniform and $h$-adaptive approximation, for any smoothness order of the target function.
TNs have the capacity to (near to) optimally approximate many function classes -- without being adapted to the particular class in question.
arXiv Detail & Related papers (2021-01-28T11:09:40Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
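For context, here is a minimal NumPy sketch of classical spectral learning of a weighted finite automaton (WFA) from Hankel-matrix blocks; it is a toy instance of the spectral approach the paper builds on, not its 2-RNN algorithm, and the WFA and basis below are chosen arbitrarily.

```python
# Toy spectral learning of a WFA from Hankel blocks (classical spectral method;
# not the paper's 2-RNN algorithm).
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth WFA over the alphabet {0, 1}: f(x) = alpha0^T A_{x_1} ... A_{x_k} alpha_inf
dim, alphabet = 3, (0, 1)
alpha0 = rng.normal(size=dim)
alpha_inf = rng.normal(size=dim)
A = {a: rng.normal(size=(dim, dim)) / (2 * dim) for a in alphabet}

def f(word):
    v = alpha0.copy()
    for a in word:
        v = v @ A[a]
    return float(v @ alpha_inf)

# Prefix/suffix basis: all words of length <= 2, including the empty word.
def words_up_to(max_len):
    out = [()]
    for length in range(1, max_len + 1):
        out += [tuple(map(int, w)) for w in np.ndindex(*([2] * length))]
    return out

prefixes = suffixes = words_up_to(2)

# Hankel blocks: H[p, s] = f(ps), H_a[p, s] = f(p a s).
H = np.array([[f(p + s) for s in suffixes] for p in prefixes])
H_a = {a: np.array([[f(p + (a,) + s) for s in suffixes] for p in prefixes]) for a in alphabet}
h_S = np.array([f(s) for s in suffixes])      # row of H for the empty prefix
h_P = np.array([f(p) for p in prefixes])      # column of H for the empty suffix

# Rank-dim truncated SVD of H, then recover an equivalent WFA.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
V = Vt[:dim].T
pinv_HV = np.linalg.pinv(H @ V)
b0, b_inf = h_S @ V, pinv_HV @ h_P
B = {a: pinv_HV @ H_a[a] @ V for a in alphabet}

def f_hat(word):
    v = b0.copy()
    for a in word:
        v = v @ B[a]
    return float(v @ b_inf)

w = (0, 1, 1, 0)
print(f(w), f_hat(w))    # the two values agree up to numerical error
```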
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- Approximation with Tensor Networks. Part I: Approximation Spaces [0.0]
We study the approximation of functions by tensor networks (TNs).
We show that Lebesgue $L^p$-spaces in one dimension can be identified with tensor product spaces of arbitrary order through tensorization.
We show that functions in these approximation classes do not possess any Besov smoothness.
arXiv Detail & Related papers (2020-06-30T21:32:59Z)
- Embedding Propagation: Smoother Manifold for Few-Shot Classification [131.81692677836202]
We propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification.
We empirically show that embedding propagation yields a smoother embedding manifold.
We show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16 percentage points.
arXiv Detail & Related papers (2020-03-09T13:51:09Z)
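A generic sketch of the smoothing step (graph-based embedding propagation in the spirit of this paper; the exact similarity and propagation operator used there may differ): embeddings are mixed through a normalized similarity graph, which pulls embeddings of nearby points closer together.

```python
# Generic embedding-propagation sketch: smooth embeddings over a similarity graph.
# The propagator below is the standard label-propagation operator; the paper's exact
# choice of similarity and operator may differ.
import numpy as np

def propagate_embeddings(Z, alpha=0.5, sigma=1.0):
    """Return smoothed embeddings for rows of Z (shape: n_points x dim)."""
    sq_dists = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    A = np.exp(-sq_dists / (2 * sigma ** 2))           # Gaussian similarities
    np.fill_diagonal(A, 0.0)                           # no self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # symmetrically normalized adjacency
    P = (1 - alpha) * np.linalg.inv(np.eye(len(Z)) - alpha * S)
    return P @ Z

# Toy check: propagation shrinks the within-cluster spread of noisy embeddings.
rng = np.random.default_rng(0)
Z = np.r_[rng.normal(0.0, 0.5, (20, 8)), rng.normal(3.0, 0.5, (20, 8))]
Z_smooth = propagate_embeddings(Z)
print("within-cluster spread before:", Z[:20].std(axis=0).mean())
print("within-cluster spread after: ", Z_smooth[:20].std(axis=0).mean())
```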
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.