Orientation Convolutional Networks for Image Recognition
- URL: http://arxiv.org/abs/2102.01523v1
- Date: Tue, 2 Feb 2021 14:49:40 GMT
- Title: Orientation Convolutional Networks for Image Recognition
- Authors: Yalan Qin, Guorui Feng, Hanzhou Wu, Yanli Ren and Xinpeng Zhang
- Abstract summary: We develop Orientation Convolution Networks (OCNs) for image recognition based on the proposed Landmark Gabor Filters (LGFs).
OCNs are compatible with any existing deep learning network.
- Score: 23.41238299693874
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Convolutional Neural Networks (DCNNs) are capable of obtaining powerful
image representations, which have attracted great attention in image
recognition. However, their internal mechanisms limit their ability to model
orientation transformations. In this paper, we develop Orientation Convolution
Networks (OCNs) for image recognition based on the proposed Landmark Gabor
Filters (LGFs), which enhance the robustness of the learned representation
against changes in orientation. By modulating the convolutional filters
with LGFs, OCNs are compatible with any existing deep learning network.
LGFs act as a Gabor filter bank obtained by selecting $ p $ $ \left( \ll
n\right) $ representative Gabor filters as landmarks and expressing the original
Gabor filters as sparse linear combinations of these landmarks. Specifically,
within a matrix factorization framework, the local and the global structure of
the original Gabor filters are flexibly integrated through sparsity and low-rank
constraints. With the propagation of the low-rank structure, the corresponding
sparsity of the representation of the original Gabor filter bank can be
significantly promoted. Experimental results over several benchmarks
demonstrate that our method is less sensitive to orientation and achieves
better performance in both accuracy and cost, compared with existing
state-of-the-art methods. Besides, our OCNs have few parameters to learn and can
significantly reduce the complexity of network training.
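The landmark idea can be sketched numerically: build a bank of n Gabor filters at different orientations, pick p (much smaller than n) of them as landmarks, and express every filter in the bank as a linear combination of the landmarks. The sketch below solves for the coefficients with plain least squares; the paper instead learns them under sparsity and low-rank constraints within a matrix factorization framework, and the filter parameters (`sigma`, `lam`) are illustrative choices, not the authors'.

```python
import numpy as np

def gabor_filter(size, theta, sigma=2.0, lam=4.0):
    """Real part of a 2D Gabor filter at orientation theta (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

# Full bank: n orientations; landmarks: p << n representative orientations.
n, p, size = 16, 4, 7
bank = np.stack([gabor_filter(size, np.pi * k / n).ravel() for k in range(n)])       # (n, size*size)
landmarks = np.stack([gabor_filter(size, np.pi * k / p).ravel() for k in range(p)])  # (p, size*size)

# Express each original filter as a linear combination of the landmarks
# (least squares here; the paper learns sparse, low-rank coefficients instead).
coeffs, *_ = np.linalg.lstsq(landmarks.T, bank.T, rcond=None)  # (p, n)
recon = coeffs.T @ landmarks                                   # reconstructed bank
err = np.linalg.norm(recon - bank) / np.linalg.norm(bank)
print(f"relative reconstruction error with {p} landmarks: {err:.3f}")
```

Only the p landmark filters and an n-by-p coefficient matrix need to be stored, which is where the parameter savings of OCNs come from.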
Related papers
- ASWT-SGNN: Adaptive Spectral Wavelet Transform-based Self-Supervised
Graph Neural Network [20.924559944655392]
This paper proposes an Adaptive Spectral Wavelet Transform-based Self-Supervised Graph Neural Network (ASWT-SGNN)
ASWT-SGNN accurately approximates the filter function in high-density spectral regions, avoiding costly eigen-decomposition.
It achieves comparable performance to state-of-the-art models in node classification tasks.
arXiv Detail & Related papers (2023-12-10T03:07:42Z)
- In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder and domain-regularized optimization to regularize the inverted code in the native latent space of the pre-trained GAN model.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
arXiv Detail & Related papers (2023-09-25T08:42:06Z)
- Mechanism of feature learning in convolutional neural networks [14.612673151889615]
We identify the mechanism of how convolutional neural networks learn from image data.
We present empirical evidence for our ansatz, including identifying high correlation between covariances of filters and patch-based AGOPs.
We then demonstrate the generality of our result by using the patch-based AGOP to enable deep feature learning in convolutional kernel machines.
arXiv Detail & Related papers (2023-09-01T16:30:02Z)
- Gabor is Enough: Interpretable Deep Denoising with a Gabor Synthesis Dictionary Prior [6.297103076360578]
Gabor-like filters have been observed in the early layers of CNN classifiers and throughout low-level image processing networks.
In this work, we take this observation to the extreme and explicitly constrain the filters of a natural-image denoising CNN to be learned 2D real Gabor filters.
We find that the proposed network (GDLNet) can achieve near state-of-the-art denoising performance amongst popular fully convolutional neural networks.
arXiv Detail & Related papers (2022-04-23T22:21:54Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop a graph neural network framework AdaGNN with a well-smooth adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Self-grouping Convolutional Neural Networks [30.732298624941738]
We propose a novel method of designing self-grouping convolutional neural networks, called SG-CNN.
For each filter, we first evaluate the importance values of its input channels to identify the importance vectors.
Using the resulting data-dependent centroids, we prune the less important connections, which implicitly minimizes the accuracy loss of the pruning.
arXiv Detail & Related papers (2020-09-29T06:24:32Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
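The norm-based baseline mentioned above can be sketched in a few lines: rank convolution filters by their L1 weight norm and drop the smallest fraction. This is an illustrative sketch of the prior approach the paper improves on, not the paper's dependency-aware mechanism; the shapes and ratio are made up for the example.

```python
import numpy as np

# Norm-based filter pruning (the baseline): rank conv filters by L1 weight
# norm and keep only the largest ones.
rng = np.random.default_rng(0)
weights = rng.normal(size=(16, 3, 3, 3))     # (out_channels, in_channels, kh, kw)
norms = np.abs(weights).sum(axis=(1, 2, 3))  # L1 norm per output filter

prune_ratio = 0.25
n_keep = int(len(norms) * (1 - prune_ratio))
keep = np.sort(np.argsort(norms)[::-1][:n_keep])  # indices of the kept filters

pruned = weights[keep]
print(pruned.shape)  # (12, 3, 3, 3)
```

A fixed ratio like this is exactly what the proposed mechanism replaces: it adjusts the sparsity-inducing regularization dynamically instead of relying on a hand-set threshold.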
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
- Gradient Centralization: A New Optimization Technique for Deep Neural Networks [74.935141515523]
gradient centralization (GC) operates directly on gradients by centralizing the gradient vectors to have zero mean.
GC can be viewed as a projected gradient descent method with a constrained loss function.
GC is very simple to implement and can be easily embedded into existing gradient based DNNs with only one line of code.
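Centralizing a gradient really is a one-line operation; a minimal NumPy sketch of the idea (not the authors' code) for a weight-gradient tensor:

```python
import numpy as np

def gradient_centralization(grad):
    """Subtract the per-output-channel mean from a weight gradient (GC sketch)."""
    if grad.ndim < 2:
        return grad  # GC applies to weight matrices/tensors, not biases
    axes = tuple(range(1, grad.ndim))
    return grad - grad.mean(axis=axes, keepdims=True)

g = np.random.randn(8, 3, 3, 3)  # e.g. a conv-layer weight gradient
gc = gradient_centralization(g)
print(np.allclose(gc.mean(axis=(1, 2, 3)), 0.0))  # True: each filter slice has zero mean
```

In an optimizer, this transform would be applied to each layer's gradient just before the update step, which is why it drops into existing gradient-based DNN training so easily.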
arXiv Detail & Related papers (2020-04-03T10:25:00Z)
- Computational optimization of convolutional neural networks using separated filters architecture [69.73393478582027]
We consider a convolutional neural network transformation that reduces computational complexity and thus speeds up neural network processing.
Convolutional neural networks (CNNs) are the standard approach to image recognition despite being computationally demanding.
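One common form of filter separation is approximating a k-by-k 2D filter by an outer product of two 1D filters, cutting per-pixel cost from k*k to 2k multiplications. The sketch below uses a rank-1 SVD factorization on an exactly separable example filter; it illustrates the general separated-filters idea under those assumptions, not this paper's specific transformation.

```python
import numpy as np

# Approximate a 2D filter by a "separated" pair of 1D filters (row and column)
# via a rank-1 SVD factorization.
k = 5
f2d = np.outer(np.hanning(k), np.hanning(k))  # an exactly separable example filter

u, s, vt = np.linalg.svd(f2d)
row = np.sqrt(s[0]) * u[:, 0]   # 1D column filter
col = np.sqrt(s[0]) * vt[0, :]  # 1D row filter
approx = np.outer(row, col)

print(np.allclose(approx, f2d))  # True: this filter is exactly rank-1
```

For filters that are not exactly rank-1, the same factorization gives the best rank-1 approximation, trading a small accuracy loss for the speedup.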
arXiv Detail & Related papers (2020-02-18T17:42:13Z)
- Gabor Convolutional Networks [103.87356592690669]
We propose a new deep model, termed Gabor Convolutional Networks (GCNs), which incorporates Gabor filters into deep convolutional neural networks (DCNNs).
GCNs can be easily implemented and are compatible with any popular deep learning architecture.
Experimental results demonstrate the strong capability of our algorithm in recognizing objects, especially where scale and rotation changes occur frequently.
arXiv Detail & Related papers (2017-05-03T14:37:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.