Neural network approximation and estimation of classifiers with
classification boundary in a Barron class
- URL: http://arxiv.org/abs/2011.09363v2
- Date: Thu, 10 Mar 2022 16:32:53 GMT
- Title: Neural network approximation and estimation of classifiers with
classification boundary in a Barron class
- Authors: Andrei Caragea, Philipp Petersen, Felix Voigtlaender
- Abstract summary: We prove bounds for the approximation and estimation of certain binary classification functions using ReLU neural networks.
Our estimation bounds provide a priori performance guarantees for empirical risk minimization using networks of a suitable size.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We prove bounds for the approximation and estimation of certain binary
classification functions using ReLU neural networks. Our estimation bounds
provide a priori performance guarantees for empirical risk minimization using
networks of a suitable size, depending on the number of training samples
available. The obtained approximation and estimation rates are independent of
the dimension of the input, showing that the curse of dimensionality can be
overcome in this setting; in fact, the input dimension only enters in the form
of a polynomial factor. Regarding the regularity of the target classification
function, we assume the interfaces between the different classes to be locally
of Barron-type. We complement our results by studying the relations between
various Barron-type spaces that have been proposed in the literature. These
spaces differ substantially more from each other than the current literature
suggests.
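For context, the classical Barron class (Barron, 1993) consists of functions with a finite first Fourier moment; the paper assumes the class interfaces are locally parametrized by functions of this kind. One standard formulation:

$$ f(x) = f(0) + \int_{\mathbb{R}^d} \big( e^{i \langle \xi, x \rangle} - 1 \big) \hat f(\xi) \, d\xi, \qquad \int_{\mathbb{R}^d} |\xi| \, |\hat f(\xi)| \, d\xi \le C < \infty. $$

Barron's classical result shows that such functions can be approximated in $L^2$ by single-hidden-layer networks with $n$ neurons at rate $O(C n^{-1/2})$, independent of the dimension $d$; the present paper transfers this dimension-independence from functions to classification boundaries.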
Related papers
- Assouad, Fano, and Le Cam with Interaction: A Unifying Lower Bound Framework and Characterization for Bandit Learnability [71.82666334363174]
We develop a unified framework for lower bound methods in statistical estimation and interactive decision making.
We introduce a novel complexity measure, decision dimension, which facilitates new lower bounds for interactive decision making.
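For orientation, the classical two-point bound of Le Cam illustrates the flavor of such methods (this is standard background, not the paper's new framework): for any two parameters $\theta_0, \theta_1$ with associated distributions $P_0, P_1$,

$$ \inf_{\hat\theta} \max_{i \in \{0,1\}} \mathbb{E}_{P_i} \big[ d(\hat\theta, \theta_i) \big] \;\ge\; \frac{d(\theta_0, \theta_1)}{2} \big( 1 - \mathrm{TV}(P_0, P_1) \big), $$

so estimation is hard whenever well-separated parameters induce nearly indistinguishable distributions.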
arXiv Detail & Related papers (2024-10-07T15:14:58Z) - Dimension-independent learning rates for high-dimensional classification
problems [53.622581586464634]
We show that every $RBV^2$ function can be approximated by a neural network with bounded weights.
We then prove the existence of a neural network with bounded weights approximating a classification function.
arXiv Detail & Related papers (2024-09-26T16:02:13Z) - Variational Classification [51.2541371924591]
Treating the inputs to the softmax layer as samples of a latent variable, this abstracted perspective reveals a potential inconsistency in the standard softmax classifier.
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
This objective induces a chosen latent distribution, in place of the implicit assumption baked into a standard softmax layer.
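A minimal sketch of this idea, assuming a Gaussian posterior over the latent softmax input and a chosen class-conditional Gaussian prior; all names and the KL weight are illustrative, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def variational_classification_loss(mu, log_var, y, class_means, kl_weight=0.1):
    """Treat the softmax input z as a latent variable with posterior
    N(mu, diag(exp(log_var))) and pull it toward a chosen class-conditional
    prior N(class_means[y], I) via a KL term (hypothetical objective)."""
    # Reparameterized sample of the latent softmax input.
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)
    probs = softmax(z @ class_means.T)  # toy linear classifier head
    nll = -np.log(probs[np.arange(len(y)), y] + 1e-12).mean()
    # KL( N(mu, sigma^2 I) || N(class_means[y], I) ), summed over coordinates.
    m = class_means[y]
    kl = 0.5 * (np.exp(log_var) + (mu - m) ** 2 - 1.0 - log_var).sum(-1).mean()
    return nll + kl_weight * kl
```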
arXiv Detail & Related papers (2023-05-17T17:47:19Z) - An Upper Bound for the Distribution Overlap Index and Its Applications [18.481370450591317]
This paper proposes an easy-to-compute upper bound for the overlap index between two probability distributions.
The proposed bound shows its value in one-class classification and domain shift analysis.
Our work shows significant promise toward broadening the applications of overlap-based metrics.
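For reference, the overlap index between densities $p$ and $q$ is $\int \min(p(x), q(x)) \, dx$. The sketch below is a naive histogram plug-in estimate from samples, shown only to fix the quantity being bounded; it is not the paper's proposed upper bound:

```python
import numpy as np

def overlap_index_hist(x, y, bins=50):
    """Naive plug-in estimate of int min(p, q) from two 1-D sample sets,
    using a shared histogram partition."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p, edges = np.histogram(x, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(y, bins=bins, range=(lo, hi), density=True)
    return float(np.sum(np.minimum(p, q) * np.diff(edges)))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 5000)
b = rng.normal(1.0, 1.0, 5000)
print(overlap_index_hist(a, b))  # approx 0.62 for unit Gaussians one sd apart
```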
arXiv Detail & Related papers (2022-12-16T20:02:03Z) - Robust-by-Design Classification via Unitary-Gradient Neural Networks [66.17379946402859]
The use of neural networks in safety-critical systems requires models that are safe and robust against adversarial attacks.
Knowing the minimal adversarial perturbation of any input x, or, equivalently, the distance of x from the classification boundary, allows evaluating the classification robustness, providing certifiable predictions.
A novel network architecture named Unitary-Gradient Neural Network is presented.
Experimental results show that the proposed architecture approximates a signed distance, hence allowing an online certifiable classification of x at the cost of a single inference.
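A toy illustration of why a unit-gradient classifier certifies in one inference: if $\|\nabla f(x)\| = 1$ everywhere, then $|f(x)|$ lower-bounds the distance from $x$ to the decision boundary $\{f = 0\}$. The example uses an exact signed distance to a sphere; names are illustrative:

```python
import numpy as np

def f_signed_distance(x, radius=1.0):
    """Exact signed distance to a sphere: its gradient has unit norm away
    from the origin, mimicking a unitary-gradient network's output."""
    return np.linalg.norm(x) - radius

def certified_radius(x):
    # With a unit-norm gradient, any perturbation of norm < |f(x)| cannot
    # flip the sign of f, so |f(x)| is a certified robustness radius.
    return abs(f_signed_distance(x))

x = np.array([1.5, 0.0])
print(certified_radius(x))  # 0.5: perturbations of norm < 0.5 keep the label
```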
arXiv Detail & Related papers (2022-09-09T13:34:51Z) - Optimal learning of high-dimensional classification problems using deep
neural networks [0.0]
We study the problem of learning classification functions from noiseless training samples, under the assumption that the decision boundary is of a certain regularity.
For the class of locally Barron-regular decision boundaries, we find that the optimal estimation rates are essentially independent of the underlying dimension.
arXiv Detail & Related papers (2021-12-23T14:15:10Z) - Sobolev-type embeddings for neural network approximation spaces [5.863264019032882]
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated.
We prove embedding theorems between these spaces for different values of $p$.
We find that, analogous to the case of classical function spaces, it is possible to trade "smoothness" (i.e., approximation rate) for increased integrability.
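The analogy here is the classical Sobolev embedding, in which smoothness can be traded for integrability (stated as background, not as the paper's theorem): for $1 \le p \le q < \infty$ and $k \ge m \ge 0$,

$$ W^{k,p}(\mathbb{R}^d) \hookrightarrow W^{m,q}(\mathbb{R}^d) \qquad \text{whenever } k - \frac{d}{p} \ge m - \frac{d}{q}. $$

The paper establishes counterparts of such embeddings with the smoothness scale replaced by neural network approximation rates.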
arXiv Detail & Related papers (2021-10-28T17:11:38Z) - MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
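A minimal sketch of discrepancy-based acquisition, assuming two already-trained auxiliary classification heads; the scoring rule ($\ell_1$ distance between softmax outputs) and all names are illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def acquisition_scores(logits_head1, logits_head2):
    """Score unlabeled samples by the disagreement between two auxiliary
    heads; large discrepancy suggests proximity to a decision boundary,
    i.e., a sample that is informative to label."""
    p1, p2 = softmax(logits_head1), softmax(logits_head2)
    return np.abs(p1 - p2).sum(axis=-1)

rng = np.random.default_rng(2)
l1, l2 = rng.normal(size=(100, 10)), rng.normal(size=(100, 10))
query_idx = np.argsort(-acquisition_scores(l1, l2))[:8]  # top-8 to annotate
```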
arXiv Detail & Related papers (2021-07-23T06:57:08Z) - Conditional Variational Capsule Network for Open Set Recognition [64.18600886936557]
In open set recognition, a classifier has to detect unknown classes that are not known at training time.
Recently proposed Capsule Networks have been shown to outperform alternatives in many fields, particularly in image recognition.
In our proposal, during training, the capsule features of each known class are encouraged to match a pre-defined Gaussian, one for each class.
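A minimal sketch of the Gaussian-matching idea: during training, pull each capsule feature toward its class's pre-defined Gaussian mean, and at test time flag samples that are far from every class Gaussian as unknown. All names and the threshold are illustrative:

```python
import numpy as np

def gaussian_match_loss(features, labels, class_means, sigma=1.0):
    """Encourage the feature of a known class c to match N(class_means[c],
    sigma^2 I); up to constants this is the Gaussian negative log-likelihood."""
    diffs = features - class_means[labels]
    return float((diffs ** 2).sum(axis=-1).mean() / (2.0 * sigma ** 2))

def is_unknown(feature, class_means, threshold=9.0):
    # Open-set test: reject if the squared distance to every class mean
    # exceeds the threshold, i.e., the sample is unlikely under all Gaussians.
    d2 = ((class_means - feature) ** 2).sum(axis=-1)
    return bool(d2.min() > threshold)
```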
arXiv Detail & Related papers (2021-04-19T09:39:30Z) - Reduced Dilation-Erosion Perceptron for Binary Classification [1.3706331473063877]
The dilation-erosion perceptron (DEP) is a neural network obtained as a convex combination of a morphological dilation and erosion.
This paper introduces the reduced dilation-erosion (r-DEP) classifier.
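A minimal sketch of a DEP in the max-plus / min-plus algebra; the weights a, b and convex weight lam are illustrative placeholders for trained parameters:

```python
import numpy as np

def dep_forward(x, a, b, lam=0.5):
    """Dilation-erosion perceptron: a convex combination of a morphological
    dilation (max-plus) and erosion (min-plus) of the input."""
    dilation = np.max(x + a)  # delta(x) = max_j (x_j + a_j)
    erosion = np.min(x + b)   # eps(x)   = min_j (x_j + b_j)
    return lam * dilation + (1.0 - lam) * erosion

x = np.array([0.2, -0.4, 1.1])
a = np.array([0.0, 0.5, -1.0])
b = np.array([0.3, 0.0, 0.2])
label = 1 if dep_forward(x, a, b) >= 0.0 else -1  # binary decision by sign
```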
arXiv Detail & Related papers (2020-03-04T19:50:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences of its use.