Deep Neural Networks are Adaptive to Function Regularity and Data Distribution in Approximation and Estimation
- URL: http://arxiv.org/abs/2406.05320v1
- Date: Sat, 8 Jun 2024 02:01:50 GMT
- Title: Deep Neural Networks are Adaptive to Function Regularity and Data Distribution in Approximation and Estimation
- Authors: Hao Liu, Jiahui Cheng, Wenjing Liao
- Abstract summary: We study how deep neural networks can adapt to different regularity in functions across different locations and scales.
Our results show that deep neural networks are adaptive to varying function regularity and to nonuniform data distributions.
- Score: 8.284464143581546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has exhibited remarkable results across diverse areas. To understand its success, substantial research has been directed towards its theoretical foundations. Nevertheless, the majority of these studies examine how well deep neural networks can model functions with uniform regularity. In this paper, we explore a different angle: how deep neural networks can adapt to regularity that varies across locations and scales, and to nonuniform data distributions. More precisely, we focus on a broad class of functions defined by nonlinear tree-based approximation. This class encompasses a range of function types, such as functions with uniform regularity and discontinuous functions. We develop nonparametric approximation and estimation theories for this function class using deep ReLU networks. Our results show that deep neural networks are adaptive to varying function regularity and to nonuniform data distributions at different locations and scales. We apply our results to several function classes and derive the corresponding approximation and generalization errors. The validity of our results is demonstrated through numerical experiments.
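To make the setting concrete, here is a minimal PyTorch sketch, not the construction analyzed in the paper: a deep ReLU network is fit to a target that is smooth on half of the domain and has a jump discontinuity at x = 0.5, with training inputs drawn nonuniformly so that samples concentrate near the jump. All architecture and training choices below are arbitrary assumptions made for the sketch.
```python
# Illustrative sketch only (not the paper's construction): a deep ReLU
# network confronted with a target whose regularity varies across the
# domain, trained on nonuniformly distributed samples.
import torch
import torch.nn as nn

torch.manual_seed(0)

def target(x):
    # Smooth sine on [0, 0.5), a jump discontinuity at x = 0.5.
    return torch.where(x < 0.5, torch.sin(4 * torch.pi * x), 1.0 + 0.5 * x)

# Nonuniform data distribution: samples concentrate near the discontinuity.
x = (0.5 + 0.25 * torch.randn(2048, 1)).clamp(0.0, 1.0)
y = target(x)

# Deep ReLU network; depth and width are arbitrary choices for the sketch.
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# Evaluate on a uniform grid; the error is typically concentrated in a
# small neighborhood of the jump and small where the target is smooth.
grid = torch.linspace(0, 1, 512).unsqueeze(1)
with torch.no_grad():
    err = (model(grid) - target(grid)).abs()
print(f"max |error| = {err.max().item():.3f}, "
      f"median |error| = {err.median().item():.3f}")
```
The point of the sketch is only the setup: a single network facing regularity that varies across the domain together with nonuniformly sampled data, which is the regime the paper's adaptivity results address.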
Related papers
- Dimension-independent learning rates for high-dimensional classification problems [53.622581586464634]
We show that every $RBV2$ function can be approximated by a neural network with bounded weights.
We then prove the existence of a neural network with bounded weights approximating a classification function.
arXiv Detail & Related papers (2024-09-26T16:02:13Z)
- SGD method for entropy error function with smoothing l0 regularization for neural networks [3.108634881604788]
The entropy error function has been widely used in neural networks.
We propose a novel entropy error function with smoothing l0 regularization for feed-forward neural networks.
Our approach enables neural networks to learn effectively, producing more accurate predictions; a minimal sketch of the smoothing-l0 idea appears after this list.
arXiv Detail & Related papers (2024-05-28T19:54:26Z)
- Going Beyond Neural Network Feature Similarity: The Network Feature Complexity and Its Interpretation Using Category Theory [64.06519549649495]
We provide the definition of what we call functionally equivalent features.
These features produce equivalent output under certain transformations.
We propose an efficient algorithm named Iterative Feature Merging.
arXiv Detail & Related papers (2023-10-10T16:27:12Z)
- Mean-field neural networks: learning mappings on Wasserstein space [0.0]
We study the machine learning task for models with operators mapping between the Wasserstein space of probability measures and a space of functions.
Two classes of neural networks are proposed to learn so-called mean-field functions.
We present different algorithms relying on mean-field neural networks for solving time-dependent mean-field problems.
arXiv Detail & Related papers (2022-10-27T05:11:42Z)
- Multigoal-oriented dual-weighted-residual error estimation using deep neural networks [0.0]
Deep learning is considered a powerful and highly flexible tool for approximating functions.
Our approach is based on a posteriori error estimation in which the adjoint problem is solved for the error localization.
An efficient and easy-to-implement algorithm is developed to obtain a posteriori error estimates for multiple goal functionals.
arXiv Detail & Related papers (2021-12-21T16:59:44Z)
- Going Beyond Linear RL: Sample Efficient Neural Function Approximation [76.57464214864756]
We study function approximation with two-layer neural networks.
Our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
arXiv Detail & Related papers (2021-07-14T03:03:56Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to addressing the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- The Connection Between Approximation, Depth Separation and Learnability in Neural Networks [70.55686685872008]
We study the connection between learnability and approximation capacity.
We show that learnability with deep networks of a target function depends on the ability of simpler classes to approximate the target.
arXiv Detail & Related papers (2021-01-31T11:32:30Z)
- Neural Network Approximations of Compositional Functions With Applications to Dynamical Systems [3.660098145214465]
We develop an approximation theory for compositional functions and their neural network approximations.
We identify a set of key features of compositional functions and the relationship between the features and the complexity of neural networks.
In addition to function approximation, we prove several upper-bound formulae for the approximation error of neural networks.
arXiv Detail & Related papers (2020-12-03T04:40:25Z)
- A Functional Perspective on Learning Symmetric Functions with Neural Networks [48.80300074254758]
We study the learning and representation of neural networks defined on measures.
We establish approximation and generalization bounds under different choices of regularization.
The resulting models can be learned efficiently and enjoy generalization guarantees that extend across input sizes.
arXiv Detail & Related papers (2020-08-16T16:34:33Z)
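As referenced in the SGD entry above, here is a minimal sketch of the idea behind smoothing l0 regularization. The surrogate penalty w^2 / (w^2 + sigma^2), which approaches the l0 count of nonzero weights as sigma -> 0, and the use of cross-entropy in place of the paper's entropy error function are assumptions made for illustration.
```python
# Sketch of smoothed l0 regularization trained by plain SGD. The surrogate
# below is an illustrative choice, not necessarily the one used in the paper.
import torch
import torch.nn as nn

def smoothed_l0(params, sigma=0.1):
    # Sum over all weights of w^2 / (w^2 + sigma^2): a differentiable
    # surrogate for counting the nonzero entries (the l0 "norm").
    total = 0.0
    for p in params:
        total = total + (p**2 / (p**2 + sigma**2)).sum()
    return total

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,))

opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1e-3  # regularization strength, arbitrary for the sketch
for step in range(100):
    opt.zero_grad()
    # Classification loss plus the smoothed-l0 penalty, minimized by SGD;
    # cross-entropy stands in here for the paper's entropy error function.
    loss = (nn.functional.cross_entropy(model(x), y)
            + lam * smoothed_l0(model.parameters()))
    loss.backward()
    opt.step()
```
Because the surrogate is differentiable everywhere, ordinary SGD applies directly, while shrinking sigma pushes the penalty toward a true sparsity count.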
This list is automatically generated from the titles and abstracts of the papers on this site.