On the rate of convergence of image classifiers based on convolutional
neural networks
- URL: http://arxiv.org/abs/2003.01526v3
- Date: Wed, 14 Oct 2020 18:20:55 GMT
- Title: On the rate of convergence of image classifiers based on convolutional
neural networks
- Authors: M. Kohler, A. Krzyzak and B. Walter
- Abstract summary: The rate of convergence of the misclassification risk of the estimates towards the optimal misclassification risk is analyzed.
This proves that in image classification it is possible to circumvent the curse of dimensionality by convolutional neural networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image classifiers based on convolutional neural networks are defined, and the
rate of convergence of the misclassification risk of the estimates towards the
optimal misclassification risk is analyzed. Under suitable assumptions on the
smoothness and structure of the a posteriori probability, a rate of convergence
is shown which is independent of the dimension of the image. This proves that
in image classification it is possible to circumvent the curse of
dimensionality by convolutional neural networks.
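For readers who want the claim in symbols, here is the standard formalization of misclassification risk and of a dimension-free rate (generic textbook definitions; the paper's own notation and constants may differ):

```latex
% Misclassification risk of a classifier f for an image-label pair (X, Y),
% and the optimal (Bayes) risk:
R(f) = \mathbf{P}\{ f(X) \neq Y \}, \qquad
R^{*} = \min_{g} \mathbf{P}\{ g(X) \neq Y \}.
% A rate independent of the image dimension d then takes the form
\mathbf{E}\, R(f_n) - R^{*} \le C \cdot n^{-\kappa}, \qquad \kappa > 0 \text{ free of } d,
% where \kappa depends on the smoothness and structure assumptions on the
% a posteriori probability \eta(x) = \mathbf{P}\{ Y = 1 \mid X = x \}.
```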
Related papers
- Analysis of the rate of convergence of an over-parametrized convolutional neural network image classifier learned by gradient descent [9.4491536689161]
Image classification based on over-parametrized convolutional neural networks with a global average-pooling layer is considered.
A bound on the rate of convergence of the difference between the misclassification risk of the newly introduced convolutional neural network estimate, learned by gradient descent, and the minimal possible misclassification risk is derived.
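As an illustration of the architecture class in question, here is a minimal CNN ending in a global average-pooling layer in PyTorch; the layer sizes are arbitrary placeholders, not the paper's construction:

```python
import torch
import torch.nn as nn

class GapCNN(nn.Module):
    """Minimal CNN ending in global average pooling (illustrative sizes)."""
    def __init__(self, in_channels: int = 1, num_classes: int = 2, width: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, width, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Global average pooling: one value per channel, independent of image size.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(width, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        h = self.pool(h).flatten(1)  # (batch, width)
        return self.head(h)

logits = GapCNN()(torch.randn(8, 1, 28, 28))  # works for any input resolution
```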
arXiv Detail & Related papers (2024-05-13T10:26:28Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
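A toy version of the underlying idea, assuming nothing about the paper's actual encoding: an MLP's parameters can be listed as graph edges (neuron i of layer l to neuron j of layer l+1, carrying the weight) plus node features (the biases):

```python
import numpy as np

def mlp_to_graph(weights, biases):
    """Encode an MLP as a graph: nodes are neurons, edges carry weights.

    weights: list of (fan_in, fan_out) arrays; biases: list of (fan_out,) arrays.
    Returns (node_features, edge_index, edge_features) in a generic format;
    the paper's equivariant encoding is more refined than this sketch.
    """
    offsets = np.cumsum([0, weights[0].shape[0]] + [b.size for b in biases])
    node_feat = np.concatenate([np.zeros(weights[0].shape[0])] + list(biases))
    edges, edge_feat = [], []
    for layer, W in enumerate(weights):
        src0, dst0 = offsets[layer], offsets[layer + 1]
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                edges.append((src0 + i, dst0 + j))
                edge_feat.append(W[i, j])
    return node_feat, np.array(edges).T, np.array(edge_feat)

W = [np.random.randn(4, 8), np.random.randn(8, 2)]
b = [np.random.randn(8), np.random.randn(2)]
nodes, edge_index, edge_w = mlp_to_graph(W, b)
```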
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- On Excess Risk Convergence Rates of Neural Network Classifiers [8.329456268842227]
We study the performance of plug-in classifiers based on neural networks in a binary classification setting as measured by their excess risks.
We analyze the estimation and approximation properties of neural networks to obtain a dimension-free, uniform rate of convergence.
arXiv Detail & Related papers (2023-09-26T17:14:10Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
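The gap that such work targets can be seen on a two-variable example (a generic illustration, not one of the paper's benchmarks): a product fuzzy semantics evaluates a constraint like A ∨ B as if A and B were independent, while an exact computation over the joint distribution keeps their dependence:

```python
import numpy as np

def exact_prob_or(joint):
    """P(A or B) computed from the full joint table over (A, B)."""
    return joint[0, 1] + joint[1, 0] + joint[1, 1]

def fuzzy_prob_or(p_a, p_b):
    """Independence-based relaxation used by product fuzzy semantics."""
    return p_a + p_b - p_a * p_b

# Joint distribution with strong negative dependence between A and B.
joint = np.array([[0.05, 0.45],
                  [0.45, 0.05]])
p_a, p_b = joint[1].sum(), joint[:, 1].sum()
print(exact_prob_or(joint))     # 0.95
print(fuzzy_prob_or(p_a, p_b))  # 0.75: the relaxation underestimates
```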
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes gradients semantics-aware in order to synthesize plausible images.
We show that our method is also applicable to text-to-image generation by building on image-text foundation models.
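One hedged reading of how a classifier can act as a generator (a generic gradient-ascent sketch; the paper's mask-based reconstruction module is not reproduced here): start from noise and ascend the target class logit of a frozen, pretrained classifier:

```python
import torch

def synthesize(classifier, target_class: int, steps: int = 200, lr: float = 0.05):
    """Gradient-ascent image synthesis from a frozen classifier (illustrative)."""
    classifier.eval()
    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)  # only the image is optimized
    for _ in range(steps):
        opt.zero_grad()
        logits = classifier(x)
        # Maximize the target logit; a small L2 prior keeps pixel values bounded.
        loss = -logits[0, target_class] + 1e-4 * x.pow(2).sum()
        loss.backward()
        opt.step()
    return x.detach().clamp(-1, 1)
```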
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
- An out-of-distribution discriminator based on Bayesian neural network epistemic uncertainty [0.19573380763700712]
Bayesian neural networks (BNNs) are an important type of neural network with built-in capability for quantifying uncertainty.
This paper discusses aleatoric and epistemic uncertainty in BNNs and how they can be calculated.
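A common way to approximate the two quantities, sketched here via Monte Carlo sampling over posterior weight draws (the paper's exact estimator may differ): predictive entropy splits into an aleatoric term (expected entropy per sample) and an epistemic remainder:

```python
import numpy as np

def uncertainty_decomposition(prob_samples: np.ndarray):
    """Split predictive entropy into aleatoric and epistemic parts.

    prob_samples: (S, C) class probabilities from S posterior weight samples.
    total = H[mean_s p_s]; aleatoric = mean_s H[p_s]; epistemic = the
    difference (mutual information between prediction and weights).
    """
    eps = 1e-12
    mean_p = prob_samples.mean(axis=0)
    total = -(mean_p * np.log(mean_p + eps)).sum()
    aleatoric = -(prob_samples * np.log(prob_samples + eps)).sum(axis=1).mean()
    return total, aleatoric, total - aleatoric

# e.g. 50 stochastic forward passes (MC dropout or sampled BNN weights):
samples = np.random.dirichlet(alpha=[2.0, 2.0, 2.0], size=50)
total, aleatoric, epistemic = uncertainty_decomposition(samples)
```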
arXiv Detail & Related papers (2022-10-18T21:15:33Z)
- Analysis of convolutional neural network image classifiers in a rotationally symmetric model [4.56877715768796]
The rate of convergence of the misclassification risk of the estimates towards the optimal misclassification risk is analyzed.
It is shown that least squares plug-in classifiers based on convolutional neural networks are able to circumvent the curse of dimensionality in binary image classification.
arXiv Detail & Related papers (2022-05-11T13:43:13Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
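The backbone of the method is the classical Nadaraya-Watson estimate of p(y | x); a bare-bones version in an assumed embedding space (kernel, bandwidth, and features here are placeholders, not the paper's choices):

```python
import numpy as np

def nadaraya_watson_probs(x, X_train, Y_onehot, bandwidth: float = 1.0):
    """Kernel-weighted estimate of the conditional label distribution.

    x: (d,) query embedding; X_train: (n, d); Y_onehot: (n, C).
    """
    d2 = ((X_train - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))  # Gaussian kernel weights
    w_sum = w.sum()
    if w_sum < 1e-12:  # far from all training data: uninformative estimate
        return np.full(Y_onehot.shape[1], 1.0 / Y_onehot.shape[1])
    return w @ Y_onehot / w_sum

# Low total kernel mass (w_sum) at a query point itself signals high uncertainty.
```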
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Approximation bounds for norm constrained neural networks with applications to regression and GANs [9.645327615996914]
We prove upper and lower bounds on the approximation error of ReLU neural networks with norm constraint on the weights.
We apply these approximation bounds to analyze the convergences of regression using norm constrained neural networks and distribution estimation by GANs.
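In practice a norm constraint of this kind can be enforced by projection after each optimizer step; a minimal PyTorch sketch (the concrete constraint here, a cap on each layer's Frobenius norm, is one illustrative choice, not necessarily the paper's):

```python
import torch

@torch.no_grad()
def project_weight_norms(model: torch.nn.Module, max_norm: float = 1.0):
    """Rescale each weight matrix so its Frobenius norm is at most max_norm."""
    for name, p in model.named_parameters():
        if name.endswith("weight"):
            norm = p.norm()
            if norm > max_norm:
                p.mul_(max_norm / norm)

# Typical use: after each optimizer.step(), call project_weight_norms(model, K).
```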
arXiv Detail & Related papers (2022-01-24T02:19:05Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
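One plausible form for such a regularizer (an assumption for illustration; the paper defines its own sharpness loss): penalize the generator when high-frequency content, measured by a Laplacian filter, deviates from that of real images:

```python
import torch
import torch.nn.functional as F

LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def sharpness(img: torch.Tensor) -> torch.Tensor:
    """Mean absolute Laplacian response: higher means sharper edges.
    img: (B, 1, H, W) grayscale batch."""
    return F.conv2d(img, LAPLACIAN, padding=1).abs().mean(dim=(1, 2, 3))

def sharpness_loss(fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    """Encourage generated images to match the sharpness of real ones."""
    return (sharpness(fake) - sharpness(real)).abs().mean()

# Added to the usual adversarial objective: L = L_adv + lambda * sharpness_loss(...)
```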
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Scene Uncertainty and the Wellington Posterior of Deterministic Image Classifiers [68.9065881270224]
We introduce the Wellington Posterior, which is the distribution of outcomes that would have been obtained in response to data that could have been generated by the same scene.
We explore the use of data augmentation, dropout, ensembling, single-view reconstruction, and model linearization to compute a Wellington Posterior.
Additional methods include the use of conditional generative models such as generative adversarial networks, neural radiance fields, and conditional prior networks.
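The augmentation-based variant, for example, can be sketched directly (a generic implementation of the stated idea; the transform choices are placeholders): apply scene-preserving perturbations and record the spread of the classifier's outputs:

```python
import torch

def wellington_posterior(classifier, image, transforms, n: int = 100):
    """Empirical distribution of predictions over re-imaged versions of one scene.

    transforms: callables that perturb the (C, H, W) image while preserving
    scene content (e.g. small crops, color jitter). Returns per-class mean
    probabilities and per-class standard deviations.
    """
    classifier.eval()
    probs = []
    with torch.no_grad():
        for i in range(n):
            t = transforms[i % len(transforms)]
            logits = classifier(t(image).unsqueeze(0))
            probs.append(torch.softmax(logits, dim=-1).squeeze(0))
    probs = torch.stack(probs)                  # (n, num_classes)
    return probs.mean(dim=0), probs.std(dim=0)  # spread = scene uncertainty
```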
arXiv Detail & Related papers (2021-06-25T20:10:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.