Incorporating Image Gradients as Secondary Input Associated with Input
Image to Improve the Performance of the CNN Model
- URL: http://arxiv.org/abs/2006.04570v1
- Date: Fri, 5 Jun 2020 14:01:52 GMT
- Title: Incorporating Image Gradients as Secondary Input Associated with Input
Image to Improve the Performance of the CNN Model
- Authors: Vijay Pandey, Shashi Bhushan Jha
- Abstract summary: In existing CNN architectures, only a single form of the given input is fed to the network.
A new architecture is proposed in which the given input is passed to the network in more than one form simultaneously.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The CNN is a very popular neural network architecture today and the most
widely used tool for vision-related tasks, extracting the important features from a
given image; the convolution operations in its distinct layers act as filters that
extract these features. In existing CNN architectures, only a single form of the given
input is fed to the network during training. In this paper, a new architecture is
proposed in which the given input is passed to the network in more than one form
simultaneously, with the layers shared by both forms of the input. We incorporate the
image gradient as a second form of the input associated with the original input image
and allow both inputs to flow through the network using the same number of parameters,
improving the performance and generalization of the model. Applied to a diverse set of
datasets such as MNIST, CIFAR10 and CIFAR100, the proposed CNN architecture shows
superior results compared to the benchmark CNN architecture that considers the input in
a single form only.
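The core idea above, that the image and its gradient share one set of convolutional weights, can be illustrated with a short sketch. The following PyTorch-style example is a minimal reading of the abstract, assuming a Sobel-based gradient image, a toy two-layer trunk, and simple additive feature fusion; the paper's actual layer configuration and fusion scheme may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def image_gradient(x):
    """Approximate per-channel gradient magnitude of a batch of images with Sobel filters."""
    c = x.shape[1]
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=x.device)
    ky = torch.tensor([[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]], device=x.device)
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)  # one depthwise kernel per channel
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(x, kx, padding=1, groups=c)
    gy = F.conv2d(x, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

class SharedTrunkCNN(nn.Module):
    """Toy CNN whose convolutional trunk is shared by the image and its gradient,
    so the parameter count equals that of the single-input baseline."""
    def __init__(self, in_ch=3, num_classes=10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        f_img = self.trunk(x).flatten(1)                   # features from the original image
        f_grad = self.trunk(image_gradient(x)).flatten(1)  # same weights applied to the gradient
        return self.classifier(f_img + f_grad)             # additive fusion (an assumption)

model = SharedTrunkCNN()
logits = model(torch.randn(4, 3, 32, 32))  # e.g. a CIFAR-10-sized batch
print(logits.shape)                        # torch.Size([4, 10])
```

Because both inputs flow through the same trunk, the only extra cost relative to the single-input baseline is the second forward pass and the gradient computation itself.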
Related papers
- Model Parallel Training and Transfer Learning for Convolutional Neural Networks by Domain Decomposition [0.0]
Deep convolutional neural networks (CNNs) have been shown to be very successful in a wide range of image processing applications.
Due to their increasing number of model parameters and an increasing availability of large amounts of training data, parallelization strategies to efficiently train complex CNNs are necessary.
arXiv Detail & Related papers (2024-08-26T17:35:01Z)
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNNs and GNNs via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs with dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
- Training Convolutional Neural Networks with the Forward-Forward algorithm [1.74440662023704]
The Forward-Forward (FF) algorithm has so far only been used in fully connected networks.
We show how the FF paradigm can be extended to CNNs; for background, a minimal sketch of the generic per-layer FF objective appears after this list.
Our FF-trained CNN, featuring a novel spatially-extended labeling technique, achieves a classification accuracy of 99.16% on the MNIST hand-written digits dataset.
arXiv Detail & Related papers (2023-12-22T18:56:35Z)
- HAT: Hierarchical Aggregation Transformers for Person Re-identification [87.02828084991062]
We take advantage of both CNNs and Transformers for image-based person Re-ID with high performance.
This work is the first to take advantage of both CNNs and Transformers for image-based person Re-ID.
arXiv Detail & Related papers (2021-07-13T09:34:54Z)
- ResMLP: Feedforward networks for image classification with data-efficient training [73.26364887378597]
We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification.
We will share our code based on the Timm library and pre-trained models.
arXiv Detail & Related papers (2021-05-07T17:31:44Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Combining pretrained CNN feature extractors to enhance clustering of complex natural images [27.784346095205358]
This paper aims at providing insight into the use of pretrained CNN features for image clustering (IC).
To solve this issue, we propose to rephrase the IC problem as a multi-view clustering (MVC) problem.
We then propose a multi-input neural network architecture that is trained end-to-end to solve the MVC problem effectively.
arXiv Detail & Related papers (2021-01-07T21:23:04Z)
- Learning Deep Interleaved Networks with Asymmetric Co-Attention for Image Restoration [65.11022516031463]
We present a deep interleaved network (DIN) that learns how information at different states should be combined for high-quality (HQ) image reconstruction.
In this paper, we propose asymmetric co-attention (AsyCA), which is attached at each interleaved node to model feature dependencies.
Our presented DIN can be trained end-to-end and applied to various image restoration tasks.
arXiv Detail & Related papers (2020-10-29T15:32:00Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of using the same network path for every instance, DG-Net aggregates features dynamically in each node, which gives the network greater representational ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Efficient and Model-Based Infrared and Visible Image Fusion Via Algorithm Unrolling [24.83209572888164]
Infrared and visible image fusion (IVIF) expects to obtain images that retain thermal radiation information from infrared images and texture details from visible images.
A model-based convolutional neural network (CNN) is proposed to overcome the shortcomings of traditional CNN-based IVIF models.
arXiv Detail & Related papers (2020-05-12T16:15:56Z)
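As referenced in the Forward-Forward entry above, the FF paradigm trains each layer locally by pushing a per-layer "goodness" (here taken as the mean squared activation) above a threshold for positive data and below it for negative data. The sketch below is a generic illustration of that objective for a single convolutional layer, not the specific spatially-extended labeling scheme of the listed paper; the threshold value, layer sizes and random stand-in data are arbitrary assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ff_layer_loss(activations, is_positive, threshold=2.0):
    # Goodness: mean of squared activations per sample (over all non-batch dims).
    goodness = activations.pow(2).mean(dim=tuple(range(1, activations.dim())))
    # Positive samples should drive goodness above the threshold, negatives below it.
    sign = 1.0 if is_positive else -1.0
    return F.softplus(-sign * (goodness - threshold)).mean()

# One locally trained convolutional layer; in FF no gradients flow between layers.
conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
opt = torch.optim.SGD(conv.parameters(), lr=0.01)

pos = torch.randn(16, 1, 28, 28)   # stands in for correctly labelled images
neg = torch.randn(16, 1, 28, 28)   # stands in for corrupted / wrongly labelled images

loss = ff_layer_loss(F.relu(conv(pos)), True) + ff_layer_loss(F.relu(conv(neg)), False)
opt.zero_grad()
loss.backward()
opt.step()
```

In an actual FF-trained CNN, several such layers are stacked and each one is optimized independently with its own positive and negative passes.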