Analysis of Convolutional Decoder for Image Caption Generation
- URL: http://arxiv.org/abs/2103.04914v1
- Date: Mon, 8 Mar 2021 17:25:31 GMT
- Title: Analysis of Convolutional Decoder for Image Caption Generation
- Authors: Sulabh Katiyar, Samir Kumar Borgohain
- Abstract summary: Convolutional Neural Networks have been proposed for Sequence Modelling tasks such as Image Caption Generation.
Unlike a Recurrent Neural Network based Decoder, a Convolutional Decoder for Image Captioning does not generally benefit from an increase in network depth.
We observe that Convolutional Decoders perform comparably to Recurrent Decoders only when trained on shorter sentences containing up to 15 words.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, Convolutional Neural Networks have been proposed for Sequence
Modelling tasks such as Image Caption Generation. However, unlike Recurrent
Neural Networks, the performance of Convolutional Neural Networks as Decoders
for Image Caption Generation has not been extensively studied. In this work, we
analyse how various aspects of Convolutional Neural Network based Decoders,
such as network complexity and depth, the use of Data Augmentation, the
Attention mechanism, and the length of sentences used during training, affect
the performance of the model. We perform experiments on the Flickr8k and
Flickr30k image captioning datasets and observe that, unlike a Recurrent Neural
Network based Decoder, a Convolutional Decoder for Image Captioning does not
generally benefit from an increase in network depth (in the form of stacked
Convolutional layers) or from the use of Data Augmentation techniques. In
addition, the Attention mechanism provides only limited performance gains with
a Convolutional Decoder. Furthermore, we observe that Convolutional Decoders
perform comparably to Recurrent Decoders only when trained on shorter sentences
containing up to 15 words; they show limitations when trained on longer
sentences, which suggests that Convolutional Decoders may not be able to model
long-term dependencies efficiently. The Convolutional Decoder also usually
performs poorly on the CIDEr evaluation metric compared to the Recurrent
Decoder.
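The depth-vs-sentence-length observation above can be illustrated with a minimal sketch. Convolutional decoders for captioning typically use causal (left-padded) 1D convolutions over the token sequence, so the receptive field grows linearly with the number of stacked layers. The code below is an illustrative toy model, not the paper's implementation: `causal_conv1d` and `receptive_field` are hypothetical helper names, and the single-channel pure-Python convolution stands in for the multi-channel learned layers of a real decoder.

```python
def causal_conv1d(x, w):
    """Single-channel causal 1D convolution.

    x: list of floats (token-embedding sequence, one channel)
    w: list of floats (kernel weights, oldest position first)
    Left-padding with zeros makes the output at position t depend
    only on x[0..t], i.e. no future tokens leak into the prediction.
    """
    k = len(w)
    xp = [0.0] * (k - 1) + list(x)  # pad on the left only
    return [sum(xp[t + i] * w[i] for i in range(k)) for t in range(len(x))]

def receptive_field(kernel, layers):
    """Context window of a stack of causal convolutions.

    Each extra layer widens the window by (kernel - 1) positions,
    so context grows only linearly with depth.
    """
    return layers * (kernel - 1) + 1
```

Under these assumptions, a stack of 7 layers with kernel size 3 sees exactly `receptive_field(3, 7) == 15` tokens of context, which loosely mirrors the paper's finding that convolutional decoders are competitive only for sentences of up to about 15 words: tokens beyond the receptive field simply cannot influence the next-word prediction, whereas a recurrent decoder's hidden state can, in principle, carry information across the whole sentence.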
Related papers
- More complex encoder is not all you need [0.882348769487259]
We introduce neU-Net (i.e., not complex encoder U-Net), which incorporates a novel Sub-pixel Convolution for upsampling to construct a powerful decoder.
Our model design achieves excellent results, surpassing other state-of-the-art methods on both the Synapse and ACDC datasets.
arXiv Detail & Related papers (2023-09-20T08:34:38Z) - PottsMGNet: A Mathematical Explanation of Encoder-Decoder Based Neural
Networks [7.668812831777923]
We study the encoder-decoder-based network architecture from the algorithmic perspective.
We use the two-phase Potts model for image segmentation as an example for our explanations.
We show that the resulting discrete PottsMGNet is equivalent to an encoder-decoder-based network.
arXiv Detail & Related papers (2023-07-18T07:48:48Z) - Progressive Fourier Neural Representation for Sequential Video
Compilation [75.43041679717376]
Motivated by continual learning, this work investigates how to accumulate and transfer neural implicit representations for multiple complex video data over sequential encoding sessions.
We propose a novel method, Progressive Fourier Neural Representation (PFNR), that aims to find an adaptive and compact sub-module in Fourier space to encode videos in each training session.
We validate our PFNR method on the UVG8/17 and DAVIS50 video sequence benchmarks and achieve impressive performance gains over strong continual learning baselines.
arXiv Detail & Related papers (2023-06-20T06:02:19Z) - Neural Data-Dependent Transform for Learned Image Compression [72.86505042102155]
We build a neural data-dependent transform and introduce a continuous online mode decision mechanism to jointly optimize the coding efficiency for each individual image.
The experimental results show the effectiveness of the proposed neural-syntax design and the continuous online mode decision mechanism.
arXiv Detail & Related papers (2022-03-09T14:56:48Z) - Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network, which can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using the Fashion-MNIST dataset.
arXiv Detail & Related papers (2022-02-09T18:48:02Z) - Small Lesion Segmentation in Brain MRIs with Subpixel Embedding [105.1223735549524]
We present a method to segment MRI scans of the human brain into ischemic stroke lesion and normal tissues.
We propose a neural network architecture in the form of a standard encoder-decoder where predictions are guided by a spatial expansion embedding network.
arXiv Detail & Related papers (2021-09-18T00:21:17Z) - Dynamic Neural Representational Decoders for High-Resolution Semantic
Segmentation [98.05643473345474]
We propose a novel decoder, termed the dynamic neural representational decoder (NRD).
As each location on the encoder's output corresponds to a local patch of the semantic labels, in this work, we represent these local patches of labels with compact neural networks.
This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient.
arXiv Detail & Related papers (2021-07-30T04:50:56Z) - Learned Multi-Resolution Variable-Rate Image Compression with
Octave-based Residual Blocks [15.308823742699039]
We propose a new variable-rate image compression framework, which employs generalized octave convolutions (GoConv) and generalized octave transposed-convolutions (GoTConv).
To enable a single model to operate with different bit rates and to learn multi-rate image features, a new objective function is introduced.
Experimental results show that the proposed framework trained with variable-rate objective function outperforms the standard codecs such as H.265/HEVC-based BPG and state-of-the-art learning-based variable-rate methods.
arXiv Detail & Related papers (2020-12-31T06:26:56Z) - Beyond Single Stage Encoder-Decoder Networks: Deep Decoders for Semantic
Image Segmentation [56.44853893149365]
Single encoder-decoder methodologies for semantic segmentation are reaching their peak in terms of segmentation quality and efficiency per number of layers.
We propose a new architecture based on a decoder which uses a set of shallow networks for capturing more information content.
In order to further improve the architecture we introduce a weight function which aims to re-balance classes to increase the attention of the networks to under-represented objects.
arXiv Detail & Related papers (2020-07-19T18:44:34Z) - Hierarchical Memory Decoding for Video Captioning [43.51506421744577]
Memory networks (MemNet) have the advantage of storing long-term information, but they have not been well exploited for video captioning.
In this paper, we devise a novel memory decoder for video captioning.
arXiv Detail & Related papers (2020-02-27T02:48:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.