Gabor Convolutional Networks
- URL: http://arxiv.org/abs/1705.01450v4
- Date: Wed, 29 Mar 2023 03:27:18 GMT
- Title: Gabor Convolutional Networks
- Authors: Shangzhen Luan, Baochang Zhang, Chen Chen, Xianbin Cao, Jungong Han,
Jianzhuang Liu
- Abstract summary: We propose a new deep model, termed Gabor Convolutional Networks (GCNs), which incorporates Gabor filters into deep convolutional neural networks (DCNNs).
GCNs can be easily implemented and are compatible with any popular deep learning architecture.
Experimental results demonstrate the strong capability of our algorithm in recognizing objects where scale and rotation changes occur frequently.
- Score: 103.87356592690669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Steerable properties dominate the design of traditional filters, e.g., Gabor
filters, and endow features with the capability of dealing with spatial
transformations. However, such excellent properties have not been well
explored in popular deep convolutional neural networks (DCNNs). In this
paper, we propose a new deep model, termed Gabor Convolutional Networks
(GCNs or Gabor CNNs), which incorporates Gabor filters into DCNNs to enhance
the resistance of deep learned features to orientation and scale changes.
Because GCNs manipulate only the basic element of DCNNs, i.e., the
convolution operator, on the basis of Gabor filters, they can be easily
implemented and are compatible with any popular deep learning architecture.
Experimental results demonstrate the strong capability of our algorithm in
recognizing objects where scale and rotation changes occur frequently. The
proposed GCNs have far fewer learnable network parameters and are thus
easier to train with an end-to-end pipeline.
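As a rough illustration of the core idea of modulating the convolution operator with Gabor filters, here is a minimal PyTorch sketch that builds a fixed bank of real Gabor kernels and multiplies each learned kernel by every kernel in the bank before convolving. The names (gabor_bank, GaborModulatedConv2d) and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F

def gabor_bank(ksize=3, orientations=4, sigma=1.0, lam=2.0, gamma=0.5):
    """Fixed bank of real 2D Gabor kernels at evenly spaced orientations
    (illustrative parameter defaults)."""
    half = ksize // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    kernels = []
    for i in range(orientations):
        theta = math.pi * i / orientations
        c, s = math.cos(theta), math.sin(theta)
        xr, yr = xs * c + ys * s, -xs * s + ys * c   # rotate coordinates
        g = torch.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) \
            * torch.cos(2 * math.pi * xr / lam)      # Gaussian envelope * carrier
        kernels.append(g)
    return torch.stack(kernels)  # (orientations, ksize, ksize)

class GaborModulatedConv2d(nn.Module):
    """Element-wise modulation of learned kernels by a fixed Gabor bank;
    output channels expand by the number of orientations."""
    def __init__(self, in_ch, out_ch, ksize=3, orientations=4):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_ch, in_ch, ksize, ksize))
        self.register_buffer("bank", gabor_bank(ksize, orientations))

    def forward(self, x):
        out_ch, in_ch, k, _ = self.weight.shape
        o = self.bank.shape[0]
        # (out_ch, 1, in_ch, k, k) * (1, o, 1, k, k) -> (out_ch, o, in_ch, k, k)
        w = self.weight.unsqueeze(1) * self.bank.view(1, o, 1, k, k)
        return F.conv2d(x, w.reshape(out_ch * o, in_ch, k, k), padding=k // 2)
```

For example, GaborModulatedConv2d(3, 8, orientations=4) applied to a (1, 3, 32, 32) input yields (1, 32, 32, 32): 8 learned kernels times 4 orientations. Only the fixed bank is added, so the learnable parameter count stays that of a plain convolution, which is one way to read the paper's claim of fewer learnable parameters.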
Related papers
- Efficient Higher-order Convolution for Small Kernels in Deep Learning [0.0]
We propose a novel method to perform higher-order Volterra filtering with lower memory and computational costs.
Building on this method, a new attention module, the Higher-order Local Attention Block (HLA), is introduced and tested.
arXiv Detail & Related papers (2024-04-25T07:42:48Z)
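For context on what such methods accelerate, here is a naive second-order Volterra filter in Python; the helper name volterra2 and the windowed (correlation) form are assumptions for illustration, and the direct O(k^2)-per-sample cost shown is the kind of cost more efficient higher-order schemes aim to reduce.

```python
import numpy as np

def volterra2(x, h1, h2):
    """Naive second-order Volterra filter over a sliding window:
    y[n] = sum_i h1[i] * x[n+i] + sum_{i,j} h2[i,j] * x[n+i] * x[n+j].
    Direct evaluation costs O(k^2) per output sample."""
    k = len(h1)
    y = np.zeros(len(x) - k + 1)
    for n in range(len(y)):
        w = x[n:n + k]                  # current input window
        y[n] = h1 @ w + w @ h2 @ w      # linear term + quadratic term
    return y

# e.g., volterra2(np.random.randn(64), np.ones(3) / 3, 0.1 * np.eye(3))
```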
- Gabor is Enough: Interpretable Deep Denoising with a Gabor Synthesis Dictionary Prior [6.297103076360578]
Gabor-like filters have been observed in the early layers of CNN classifiers and throughout low-level image processing networks.
In this work, we take this observation to the extreme and explicitly constrain the filters of a natural-image denoising CNN to be learned 2D real Gabor filters.
We find that the proposed network (GDLNet) can achieve near state-of-the-art denoising performance amongst popular fully convolutional neural networks.
arXiv Detail & Related papers (2022-04-23T22:21:54Z)
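A minimal sketch of such a constraint, assuming each filter is generated on the fly from five learnable Gabor parameters; the module name LearnedGabor2d, the initialization, and the epsilon guards are illustrative, not GDLNet's actual code.

```python
import torch
import torch.nn as nn

class LearnedGabor2d(nn.Module):
    """Filters constrained to be real 2D Gabor functions whose parameters
    (sigma, theta, lambda, psi, gamma) are learned per filter."""
    def __init__(self, n_filters=32, ksize=7):
        super().__init__()
        self.ksize = ksize
        self.params = nn.Parameter(torch.rand(n_filters, 5))  # random init (assumption)

    def forward(self):
        half = self.ksize // 2
        ys, xs = torch.meshgrid(
            torch.arange(-half, half + 1, dtype=torch.float32),
            torch.arange(-half, half + 1, dtype=torch.float32),
            indexing="ij",
        )
        sigma, theta, lam, psi, gamma = (p[:, None, None] for p in self.params.unbind(1))
        xr = xs * torch.cos(theta) + ys * torch.sin(theta)   # rotated coordinates,
        yr = -xs * torch.sin(theta) + ys * torch.cos(theta)  # broadcast to (n, k, k)
        env = torch.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2 + 1e-6))
        carrier = torch.cos(2 * torch.pi * xr / (lam.abs() + 1e-3) + psi)
        return env * carrier  # (n_filters, ksize, ksize); unsqueeze(1) for F.conv2d
```

Gradients flow only to the five parameters per filter, so every kernel remains a valid Gabor function throughout training, which is what makes this kind of layer interpretable.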
- Understanding the Basis of Graph Convolutional Neural Networks via an Intuitive Matched Filtering Approach [7.826806223782053]
Graph Convolutional Neural Networks (GCNNs) are becoming a preferred model for data processing on irregular domains.
We show that their convolution layers effectively perform matched filtering of input data with the chosen patterns.
A numerical example guides the reader through the various steps of GCNN operation and learning, both visually and numerically.
arXiv Detail & Related papers (2021-08-23T12:41:06Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop AdaGNN, a graph neural network framework with a smooth adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Orientation Convolutional Networks for Image Recognition [23.41238299693874]
We develop Orientation Convolutional Networks (OCNs) for image recognition based on the proposed Landmark Gabor Filters (LGFs).
OCNs are compatible with any existing deep learning network.
arXiv Detail & Related papers (2021-02-02T14:49:40Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- Learning Sparse Filters in Deep Convolutional Neural Networks with a l1/l2 Pseudo-Norm [5.3791844634527495]
Deep neural networks (DNNs) have proven to be efficient for numerous tasks, but come at a high memory and computation cost.
Recent research has shown that their structure can be more compact without compromising their performance.
We present a sparsity-inducing regularization term based on the ratio l1/l2 pseudo-norm defined on the filter coefficients.
arXiv Detail & Related papers (2020-07-20T11:56:12Z)
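The regularizer's form is simple enough to state directly; a minimal sketch, assuming it is applied per filter and summed into the loss (the paper's exact weighting and normalization may differ):

```python
import torch

def l1_over_l2(w: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Ratio ||w||_1 / ||w||_2 of one filter's coefficients; the ratio is
    smallest for sparse vectors, so adding it to the loss induces sparsity."""
    w = w.flatten()
    return w.abs().sum() / (w.pow(2).sum().sqrt() + eps)

# Illustrative use: sum over a conv layer's filters and add to the task loss.
# loss = task_loss + reg_strength * sum(l1_over_l2(f) for f in conv.weight)
```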
- Gradient Centralization: A New Optimization Technique for Deep Neural Networks [74.935141515523]
Gradient centralization (GC) operates directly on gradients by centralizing the gradient vectors to have zero mean.
GC can be viewed as a projected gradient descent method with a constrained loss function.
GC is very simple to implement and can be easily embedded into existing gradient based DNNs with only one line of code.
arXiv Detail & Related papers (2020-04-03T10:25:00Z)
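A minimal sketch of that one-line operation, assuming GC is applied to each weight gradient just before the optimizer step (the helper name and usage pattern are illustrative):

```python
import torch

@torch.no_grad()
def centralize_gradient(grad: torch.Tensor) -> torch.Tensor:
    """Remove the mean over all dimensions except the output dimension,
    so each output slice of the gradient has zero mean."""
    if grad.dim() > 1:  # weight matrices/tensors only; biases are left alone
        grad -= grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)
    return grad

# Illustrative use inside a training loop, before optimizer.step():
# for p in model.parameters():
#     if p.grad is not None:
#         centralize_gradient(p.grad)
```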
- LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation [100.76229017056181]
Graph Convolution Network (GCN) has become the new state of the art for collaborative filtering.
In this work, we aim to simplify the design of GCN to make it more concise and appropriate for recommendation.
We propose a new model named LightGCN, including only the most essential component in GCN -- neighborhood aggregation.
arXiv Detail & Related papers (2020-02-06T06:53:42Z)
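A minimal sketch of that single component, assuming a symmetrically normalized user-item adjacency matrix and uniform averaging over layers (a reading of the abstract, not the authors' exact code):

```python
import torch

def lightgcn_propagate(emb: torch.Tensor, adj_norm: torch.Tensor,
                       n_layers: int = 3) -> torch.Tensor:
    """LightGCN-style propagation: repeated neighborhood aggregation with no
    feature transform or nonlinearity, then an average across layers.
    `emb` stacks user and item embeddings; `adj_norm` is the normalized
    adjacency (sparse) over the joint user-item graph."""
    layers = [emb]
    for _ in range(n_layers):
        emb = torch.sparse.mm(adj_norm, emb)  # pure aggregation, no weights
        layers.append(emb)
    return torch.stack(layers).mean(dim=0)    # uniform layer combination
```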
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.