Analysis of convolutional neural network image classifiers in a
rotationally symmetric model
- URL: http://arxiv.org/abs/2205.05500v1
- Date: Wed, 11 May 2022 13:43:13 GMT
- Title: Analysis of convolutional neural network image classifiers in a
rotationally symmetric model
- Authors: Michael Kohler and Benjamin Walter
- Abstract summary: The rate of convergence of the misclassification risk of the estimates towards the optimal misclassification risk is analyzed.
It is shown that least squares plug-in classifiers based on convolutional neural networks are able to circumvent the curse of dimensionality in binary image classification.
- Score: 4.56877715768796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural network image classifiers are defined and the rate of
convergence of the misclassification risk of the estimates towards the optimal
misclassification risk is analyzed. Here we consider images as random variables
with values in some functional space, where we only observe discrete samples as
function values on some finite grid. Under suitable structural and smoothness
assumptions on the functional a posteriori probability, which includes some
kind of symmetry against rotation of subparts of the input image, it is shown
that least squares plug-in classifiers based on convolutional neural networks
are able to circumvent the curse of dimensionality in binary image
classification if we neglect a resolution-dependent error term. The finite
sample size behavior of the classifier is analyzed by applying it to simulated
and real data.
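As a rough illustration of the least squares plug-in idea (not the paper's convolutional network construction), the following sketch fits a regression estimate of the a posteriori probability by least squares and classifies by thresholding at 1/2; the linear model and the toy Gaussian data are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters in the plane, labels in {0, 1}.
n = 200
X0 = rng.normal(-1.0, 0.5, size=(n, 2))
X1 = rng.normal(+1.0, 0.5, size=(n, 2))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Least squares regression estimate of the a posteriori probability
# eta(x) = P(Y = 1 | X = x); a linear model with intercept stands in
# for the convolutional network class analyzed in the paper.
A = np.hstack([X, np.ones((2 * n, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
eta_hat = A @ coef

# Plug-in classifier: predict label 1 whenever eta_hat(x) >= 1/2.
y_pred = (eta_hat >= 0.5).astype(float)
error = np.mean(y_pred != y)  # empirical misclassification risk
```

The rate-of-convergence results in the paper concern how fast the risk of such plug-in rules approaches the optimal (Bayes) misclassification risk as the sample size grows.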
Related papers
- On Excess Risk Convergence Rates of Neural Network Classifiers [8.329456268842227]
We study the performance of plug-in classifiers based on neural networks in a binary classification setting as measured by their excess risks.
We analyze the estimation and approximation properties of neural networks to obtain a dimension-free, uniform rate of convergence.
arXiv Detail & Related papers (2023-09-26T17:14:10Z)
- Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
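As a hedged sketch of the Nadaraya-Watson idea behind NUQ (the function name, kernel choice, and toy data below are illustrative, not the paper's implementation), one can estimate the conditional label distribution with a Gaussian kernel and use the kernel density as a confidence signal:

```python
import numpy as np

def nw_conditional(x, X_train, y_train, bandwidth=0.5):
    """Nadaraya-Watson estimate of P(Y = 1 | X = x) with a Gaussian
    kernel; also returns the kernel density at x, which serves as a
    confidence signal (low density -> high uncertainty)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    density = w.sum() / len(X_train)
    p1 = np.dot(w, y_train) / w.sum()
    return p1, density

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.3, (50, 1)), rng.normal(2, 0.3, (50, 1))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Near the training data the density is high; far away it collapses,
# flagging the prediction as uncertain even if p1 looks confident.
p1_in, dens_in = nw_conditional(np.array([2.0]), X, y)
p1_out, dens_out = nw_conditional(np.array([10.0]), X, y)
```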
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Analysis of convolutional neural network image classifiers in a hierarchical max-pooling model with additional local pooling [0.0]
Image classification is considered, and a hierarchical max-pooling model with additional local pooling is introduced.
The additional local pooling enables the hierarchical model to combine parts of the image which have a variable relative distance towards each other.
arXiv Detail & Related papers (2021-05-31T16:08:00Z)
- Anomaly Detection in Image Datasets Using Convolutional Neural Networks, Center Loss, and Mahalanobis Distance [0.0]
User activities generate a significant number of poor-quality or irrelevant images and data vectors.
For neural networks, anomalies are usually defined as out-of-distribution samples.
This work proposes methods for supervised and semi-supervised detection of out-of-distribution samples in image datasets.
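A minimal sketch of the Mahalanobis-distance component of such a detector (the feature vectors here are random stand-ins for CNN embeddings; the function and thresholds are illustrative assumptions, not the paper's method):

```python
import numpy as np

def mahalanobis_score(feats_train, feats_test):
    """Mahalanobis distance of test feature vectors from the training
    feature distribution; large distances flag out-of-distribution
    samples."""
    mu = feats_train.mean(axis=0)
    cov = np.cov(feats_train, rowvar=False)
    # Small ridge term keeps the covariance invertible.
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    diff = feats_test - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

rng = np.random.default_rng(2)
in_dist = rng.normal(0.0, 1.0, size=(500, 4))  # stand-in for CNN embeddings
ood = rng.normal(6.0, 1.0, size=(10, 4))       # shifted, "anomalous" samples

scores_in = mahalanobis_score(in_dist, in_dist[:10])
scores_ood = mahalanobis_score(in_dist, ood)
```

In practice the embeddings would come from a network trained with center loss, which tightens in-distribution clusters and widens the score gap.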
arXiv Detail & Related papers (2021-04-13T13:44:03Z)
- Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
arXiv Detail & Related papers (2020-07-30T05:48:48Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
- Efficient detection of adversarial images [2.6249027950824506]
Some or all pixel values of an image are modified by an external attacker, so that the change is almost invisible to the human eye.
This paper first proposes a novel pre-processing technique that facilitates the detection of such modified images.
An adaptive version of this algorithm is proposed where a random number of perturbations are chosen adaptively.
arXiv Detail & Related papers (2020-07-09T05:35:49Z)
- Set Based Stochastic Subsampling [85.5331107565578]
We propose a set-based two-stage end-to-end neural subsampling model that is jointly optimized with an arbitrary downstream task network.
We show that it outperforms the relevant baselines under low subsampling rates on a variety of tasks including image classification, image reconstruction, function reconstruction and few-shot classification.
arXiv Detail & Related papers (2020-06-25T07:36:47Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
- On the rate of convergence of image classifiers based on convolutional neural networks [0.0]
The rate of convergence of the misclassification risk of the estimates towards the optimal misclassification risk is analyzed.
This proves that in image classification it is possible to circumvent the curse of dimensionality by convolutional neural networks.
arXiv Detail & Related papers (2020-03-03T14:24:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.