CyCNN: A Rotation Invariant CNN using Polar Mapping and Cylindrical
Convolution Layers
- URL: http://arxiv.org/abs/2007.10588v1
- Date: Tue, 21 Jul 2020 04:05:35 GMT
- Title: CyCNN: A Rotation Invariant CNN using Polar Mapping and Cylindrical
Convolution Layers
- Authors: Jinpyo Kim, Wooekun Jung, Hyungmo Kim, Jaejin Lee
- Abstract summary: This paper proposes a deep CNN model, called CyCNN, which exploits polar mapping of input images to convert rotation to translation.
A CyConv layer exploits the cylindrically sliding windows (CSW) mechanism that vertically extends the input-image receptive fields of boundary units in a convolutional layer.
We show that if there is no data augmentation during training, CyCNN significantly improves classification accuracies when compared to conventional CNN models.
- Score: 2.4316550366482357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Convolutional Neural Networks (CNNs) are empirically known to be
invariant to moderate translation but not to rotation in image classification.
This paper proposes a deep CNN model, called CyCNN, which exploits polar
mapping of input images to convert rotation to translation. To deal with the
cylindrical property of the polar coordinates, we replace the convolution layers
in conventional CNNs with cylindrical convolutional (CyConv) layers. A CyConv layer
exploits the cylindrically sliding windows (CSW) mechanism that vertically
extends the input-image receptive fields of boundary units in a convolutional
layer. We evaluate CyCNN and conventional CNN models for classification tasks
on rotated MNIST, CIFAR-10, and SVHN datasets. We show that if there is no data
augmentation during training, CyCNN significantly improves classification
accuracies when compared to conventional CNN models. Our implementation of
CyCNN is publicly available on https://github.com/mcrl/CyCNN.
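The core idea can be sketched briefly: polar mapping turns a rotation about the image center into a cyclic shift along the angular axis, and cylindrical padding lets a convolution's boundary units see across that seam. The following NumPy sketch is a minimal illustration of the idea, not the paper's implementation; the function names, nearest-neighbor sampling, and placing the angular axis horizontally (it is vertical in CyCNN's layout) are choices made here.

```python
import numpy as np

def to_polar(img, r_bins=None, theta_bins=None):
    """Resample a square grayscale image onto an (r, theta) grid around
    its center. A rotation of the input then appears as a cyclic shift
    along the theta axis of the output."""
    h, w = img.shape
    r_bins = r_bins or h
    theta_bins = theta_bins or w
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rs = np.linspace(0.0, min(cy, cx), r_bins)
    thetas = np.linspace(0.0, 2.0 * np.pi, theta_bins, endpoint=False)
    rr, tt = np.meshgrid(rs, thetas, indexing="ij")
    # Nearest-neighbor sampling of the source pixels.
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

def cylindrical_pad(x, k):
    """Padding for a k x k convolution on a polar image: wrap around on
    the angular axis (the image is a cylinder there), zero-pad on the
    radial axis as usual."""
    p = k // 2
    x = np.pad(x, ((p, p), (0, 0)), mode="constant")  # radial: zeros
    x = np.pad(x, ((0, 0), (p, p)), mode="wrap")      # angular: wrap around
    return x
```

A standard convolution applied after `cylindrical_pad` captures the spirit of the CSW mechanism: units at the angular boundary receive input from the opposite edge of the polar image rather than from zero padding.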
Related papers
- Model Parallel Training and Transfer Learning for Convolutional Neural Networks by Domain Decomposition [0.0]
Deep convolutional neural networks (CNNs) have been shown to be very successful in a wide range of image processing applications.
Due to their increasing number of model parameters and the growing availability of large amounts of training data, parallelization strategies for efficiently training complex CNNs are necessary.
arXiv Detail & Related papers (2024-08-26T17:35:01Z)
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNN and GNN together via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs containing dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
- Sharpened Cosine Similarity based Neural Network for Hyperspectral Image Classification [0.456877715768796]
Hyperspectral Image Classification (HSIC) is a difficult task due to high inter- and intra-class similarity and variability, nested regions, and overlap.
2D Convolutional Neural Networks (CNNs) emerged as a viable option, whereas 3D CNNs are a better alternative due to their more accurate classification.
This paper introduces the Sharpened Cosine Similarity (SCS) concept as an alternative to convolutions in a neural network for HSIC.
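A single SCS unit can be sketched as follows. This follows the commonly cited SCS formulation rather than the paper's exact parameterization; the function name and the `p`/`q` parameters are assumptions made here.

```python
import numpy as np

def sharpened_cosine_similarity(s, k, p=2.0, q=1e-6):
    """Sharpened cosine similarity between a flattened input patch s and
    a kernel k: cosine similarity, with the norms offset by a small q
    for numerical stability, raised to the power p while keeping the
    sign. It replaces the raw dot product of an ordinary convolution
    unit, so the response measures direction rather than magnitude."""
    c = np.dot(s, k) / ((np.linalg.norm(s) + q) * (np.linalg.norm(k) + q))
    return np.sign(c) * np.abs(c) ** p
```

Raising to the power `p` "sharpens" the response: near-parallel patches keep a score near 1 while partial matches are suppressed toward 0.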
arXiv Detail & Related papers (2023-05-26T07:04:00Z)
- RIC-CNN: Rotation-Invariant Coordinate Convolutional Neural Network [56.42518353373004]
We propose a new convolutional operation, called Rotation-Invariant Coordinate Convolution (RIC-C).
By replacing all standard convolutional layers in a CNN with the corresponding RIC-C, a RIC-CNN can be derived.
It can be observed that RIC-CNN achieves state-of-the-art classification on the rotated MNIST test dataset.
arXiv Detail & Related papers (2022-11-21T19:27:02Z)
- An Alternative Practice of Tropical Convolution to Traditional Convolutional Neural Networks [0.5837881923712392]
We propose a new type of CNN called Tropical Convolutional Neural Networks (TCNNs).
TCNNs are built on tropical convolutions, in which the multiplications and additions of conventional convolutional layers are replaced by additions and min/max operations, respectively.
We show that TCNNs can achieve higher expressive power than ordinary convolutional layers on the MNIST and CIFAR10 image datasets.
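The replacement described above can be sketched in one dimension. This is a minimal illustration under the min-plus/max-plus reading of the abstract; `tropical_conv1d` and its `op` switch are names chosen here, not the paper's API.

```python
import numpy as np

def tropical_conv1d(x, w, op=np.min):
    """Valid-mode 1-D tropical convolution: where an ordinary
    convolution multiplies each window by the weights and then sums,
    this adds the weights and then reduces the window with a min
    (min-plus algebra) or max (max-plus algebra)."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    n, k = len(x), len(w)
    return np.array([op(x[i:i + k] + w) for i in range(n - k + 1)])
```

With `op=np.min` this is the min-plus (tropical) semiring analogue of convolution; a TCNN layer would use such units in place of standard convolution units.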
arXiv Detail & Related papers (2021-03-03T00:13:30Z)
- Shape-Tailored Deep Neural Networks [87.55487474723994]
We present Shape-Tailored Deep Neural Networks (ST-DNN).
ST-DNN extend convolutional networks (CNNs), which aggregate data from fixed (square) neighborhoods, to compute descriptors defined on arbitrarily shaped regions.
We show that ST-DNN are 3-4 orders of magnitude smaller than the CNNs used for segmentation.
arXiv Detail & Related papers (2021-02-16T23:32:14Z)
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
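Concretely, a graph convolutional filter is a polynomial in a graph shift operator applied to a graph signal. The sketch below is a standard textbook formulation, not code from the paper; the function name and the choice of shift operator are assumptions made here.

```python
import numpy as np

def graph_filter(S, x, h):
    """Apply the graph convolutional filter sum_k h[k] * S^k @ x, where
    S is a graph shift operator (e.g. adjacency or Laplacian matrix)
    and x is a signal with one value per node. On a cycle graph this
    reduces to ordinary circular convolution, which is the sense in
    which GNN layers generalize CNN layers."""
    y = np.zeros_like(x, dtype=float)
    z = np.asarray(x, dtype=float)
    for hk in h:
        y += hk * z
        z = S @ z  # shift the signal one hop over the graph
    return y
```

A GNN layer then stacks a bank of such filters (one set of taps `h` per input/output feature pair) followed by a pointwise nonlinearity.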
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
- Visual Commonsense R-CNN [102.5061122013483]
We present a novel unsupervised feature representation learning method, Visual Commonsense Region-based Convolutional Neural Network (VC R-CNN).
VC R-CNN serves as an improved visual region encoder for high-level tasks such as captioning and VQA.
We extensively apply VC R-CNN features in prevailing models of three popular tasks: Image Captioning, VQA, and VCR, and observe consistent performance boosts across them.
arXiv Detail & Related papers (2020-02-27T15:51:19Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.