Improving correlation method with convolutional neural networks
- URL: http://arxiv.org/abs/2004.09430v1
- Date: Mon, 20 Apr 2020 16:36:01 GMT
- Title: Improving correlation method with convolutional neural networks
- Authors: Dmitriy Goncharov and Rostislav Starikov
- Abstract summary: We present a convolutional neural network for the classification of correlation responses obtained by correlation filters.
The proposed approach can improve the accuracy of classification, as well as achieve invariance to the image classes and parameters.
- License: http://creativecommons.org/licenses/by-sa/4.0/
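The pipeline described in the abstract has two stages: compute a correlation response by applying a correlation filter to an input image, then classify that response (with a CNN, in the proposed method). The first stage can be sketched as FFT-based cross-correlation; the snippet below is an illustrative sketch, not the authors' code, and the function name and toy data are assumptions:

```python
import numpy as np

def correlation_response(image, template):
    """Cross-correlate an image with a filter template via the FFT.
    The resulting response map (peak location and sharpness) is the
    input a downstream classifier -- a CNN in the paper -- would see."""
    F = np.fft.fft2(image)
    # zero-pad the template to the image size before transforming
    H = np.fft.fft2(template, s=image.shape)
    # circular cross-correlation = IFFT of F * conj(H)
    return np.fft.ifft2(F * np.conj(H)).real

# toy example: the template embedded in a larger, otherwise empty image
rng = np.random.default_rng(0)
template = rng.random((8, 8))
image = np.zeros((32, 32))
image[10:18, 5:13] = template
resp = correlation_response(image, template)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)  # response peaks at the template's top-left corner: (10, 5)
```

A CNN classifier then replaces the usual hand-crafted peak statistics (e.g. peak-to-sidelobe ratio) applied to such response maps.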
Related papers
- Canonical Correlation Guided Deep Neural Network [14.188285111418516]
We present a canonical correlation guided learning framework realized by deep neural networks (CCDNN).
In the proposed method, the optimization is not restricted to maximizing correlation; instead, canonical correlation is imposed as a constraint.
To reduce the redundancy induced by correlation, a redundancy filter is designed.
arXiv Detail & Related papers (2024-09-28T16:08:44Z) - Analysis of the rate of convergence of an over-parametrized convolutional neural network image classifier learned by gradient descent [9.4491536689161]
Image classification based on over-parametrized convolutional neural networks with a global average-pooling layer is considered.
A bound on the rate of convergence of the difference between the misclassification risk of the newly introduced convolutional neural network estimate, learned by gradient descent, and the optimal misclassification risk is derived.
arXiv Detail & Related papers (2024-05-13T10:26:28Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Reparameterization through Spatial Gradient Scaling [69.27487006953852]
Reparameterization aims to improve the generalization of deep neural networks by transforming convolutional layers into equivalent multi-branched structures during training.
We present a novel spatial gradient scaling method to redistribute learning focus among weights in convolutional networks.
arXiv Detail & Related papers (2023-03-05T17:57:33Z) - Saliency Map Based Data Augmentation [0.0]
We present a new method that uses saliency maps to restrict the invariance of neural networks to certain regions.
This method provides higher test accuracy in classification tasks.
arXiv Detail & Related papers (2022-05-29T15:04:59Z) - Decomposing neural networks as mappings of correlation functions [57.52754806616669]
We study the mapping between probability distributions implemented by a deep feed-forward network.
We identify essential statistics in the data, as well as different information representations that can be used by neural networks.
arXiv Detail & Related papers (2022-02-10T09:30:31Z) - Approximation bounds for norm constrained neural networks with applications to regression and GANs [9.645327615996914]
We prove upper and lower bounds on the approximation error of ReLU neural networks with norm constraint on the weights.
We apply these approximation bounds to analyze the convergences of regression using norm constrained neural networks and distribution estimation by GANs.
arXiv Detail & Related papers (2022-01-24T02:19:05Z) - Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
arXiv Detail & Related papers (2020-07-30T05:48:48Z) - Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
arXiv Detail & Related papers (2020-06-07T07:06:35Z) - On the rate of convergence of image classifiers based on convolutional neural networks [0.0]
The rate of convergence of the misclassification risk of the estimates towards the optimal misclassification risk is analyzed.
This proves that, in image classification, it is possible to circumvent the curse of dimensionality with convolutional neural networks.
arXiv Detail & Related papers (2020-03-03T14:24:24Z)
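Several entries above (adaptive connection sampling, multiplicative activation noise) regularize networks with stochastic masking or noise on activations. As one hedged illustration of the idea behind the sampling-free variational inference entry, a mean-one multiplicative Gaussian noise layer might look like the following sketch; the function name and parameterization are assumptions, not that paper's implementation:

```python
import numpy as np

def multiplicative_noise_layer(x, log_alpha, rng, training=True):
    """Apply multiplicative Gaussian noise a ~ N(1, alpha) to activations.
    alpha = exp(log_alpha) is a per-unit noise variance; because the noise
    has mean 1, the layer is simply the identity at test time."""
    if not training:
        return x
    alpha = np.exp(log_alpha)
    noise = rng.normal(loc=1.0, scale=np.sqrt(alpha), size=x.shape)
    return x * noise

rng = np.random.default_rng(0)
x = np.ones((4, 3))
log_alpha = np.full(3, -2.0)  # small per-unit noise variance
y = multiplicative_noise_layer(x, log_alpha, rng)
print(y.mean())  # close to 1, since the noise is mean-one
```

Treating `log_alpha` as a learnable parameter is what turns this dropout-like layer into a variational posterior over activations.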
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.