Visual Commonsense R-CNN
- URL: http://arxiv.org/abs/2002.12204v3
- Date: Mon, 27 Apr 2020 04:29:49 GMT
- Title: Visual Commonsense R-CNN
- Authors: Tan Wang, Jianqiang Huang, Hanwang Zhang, Qianru Sun
- Abstract summary: We present a novel unsupervised feature representation learning method, Visual Commonsense Region-based Convolutional Neural Network (VC R-CNN).
VC R-CNN serves as an improved visual region encoder for high-level tasks such as captioning and VQA.
We extensively apply VC R-CNN features in prevailing models of three popular tasks: Image Captioning, VQA, and VCR, and observe consistent performance boosts across them.
- Score: 102.5061122013483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel unsupervised feature representation learning method,
Visual Commonsense Region-based Convolutional Neural Network (VC R-CNN), to
serve as an improved visual region encoder for high-level tasks such as
captioning and VQA. Given a set of detected object regions in an image (e.g.,
from Faster R-CNN), VC R-CNN, like other unsupervised feature learning methods
(e.g., word2vec), trains with a proxy objective: predicting the contextual
objects of a region. The difference, however, is fundamental: VC R-CNN makes
this prediction with the causal intervention P(Y|do(X)), while the others use
the conventional likelihood P(Y|X). This is the core reason why VC R-CNN can
learn "sense-making" knowledge, such as a chair can be sat on, rather than
merely "common" co-occurrences, such as a chair being likely to exist when a
table is observed. We extensively apply VC R-CNN features in prevailing models
for three popular tasks, Image Captioning, VQA, and VCR, and observe consistent
performance boosts across them, achieving many new state-of-the-art results.
Code and features are available at https://github.com/Wangt-CN/VC-R-CNN.
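
To make the contrast between P(Y|do(X)) and P(Y|X) concrete, here is a minimal PyTorch sketch of a backdoor-adjusted context predictor, P(Y|do(X)) = Σ_z P(Y|X,z)P(z). The confounder dictionary, fusion network, and all shapes are illustrative assumptions; the paper itself approximates the adjustment differently (with attention over class-wise average features), so treat this as a sketch of the idea, not the method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BackdoorContextPredictor(nn.Module):
    """Sketch of backdoor adjustment: P(Y|do(X)) = sum_z P(Y|X, z) P(z).

    `confounders` is a fixed dictionary Z (e.g., per-class mean region
    features) with prior `prior`; both are assumptions for illustration.
    """
    def __init__(self, feat_dim, num_classes, confounders, prior):
        super().__init__()
        self.register_buffer("z", confounders)   # (K, feat_dim)
        self.register_buffer("p_z", prior)       # (K,), sums to 1
        self.fuse = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_classes))

    def forward(self, x):                         # x: (B, feat_dim) region feature
        B, K = x.shape[0], self.z.shape[0]
        xz = torch.cat([x.unsqueeze(1).expand(B, K, -1),
                        self.z.unsqueeze(0).expand(B, K, -1)], dim=-1)
        p_y_given_xz = F.softmax(self.fuse(xz), dim=-1)        # P(Y|X, z)
        return (self.p_z.view(1, K, 1) * p_y_given_xz).sum(1)  # P(Y|do(X))

# toy usage: 10 confounders over 1024-d features, 80 context classes
model = BackdoorContextPredictor(1024, 80, torch.randn(10, 1024),
                                 torch.full((10,), 0.1))
p = model(torch.randn(4, 1024))                   # (4, 80) adjusted predictions
```

Training would maximize the log of these adjusted probabilities for the class labels of each region's contextual objects, in contrast to a plain classifier head that models P(Y|X) directly.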
Related papers
- Recurrent Neural Networks for Still Images [0.0]
We argue that RNNs can effectively handle still images by interpreting the pixels as a sequence.
We introduce a novel RNN design tailored for two-dimensional inputs, such as images, and a custom version of the bidirectional RNN (BiRNN) that is more memory-efficient than traditional implementations.
arXiv Detail & Related papers (2024-09-10T06:07:20Z)
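
As a rough illustration of the pixels-as-sequence idea in the paper above, the sketch below reads a still image row by row with an off-the-shelf bidirectional LSTM. This is only the generic framing; the paper's dedicated two-dimensional RNN design and memory-efficient BiRNN differ.

```python
import torch
import torch.nn as nn

class RowSequenceRNN(nn.Module):
    # Minimal sketch: read a still image as a sequence of rows with a
    # bidirectional LSTM, then classify from the final timestep. Sizes
    # and the single-channel assumption are illustrative.
    def __init__(self, width, hidden=128, num_classes=10):
        super().__init__()
        self.rnn = nn.LSTM(width, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):            # x: (B, 1, H, W) grayscale image
        seq = x.squeeze(1)           # (B, H, W): each row is one timestep
        out, _ = self.rnn(seq)       # (B, H, 2*hidden)
        return self.head(out[:, -1]) # classify from the last timestep

model = RowSequenceRNN(width=28)
logits = model(torch.randn(2, 1, 28, 28))  # e.g., MNIST-sized input
```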
- RIC-CNN: Rotation-Invariant Coordinate Convolutional Neural Network [56.42518353373004]
We propose a new convolutional operation, called Rotation-Invariant Coordinate Convolution (RIC-C).
By replacing all standard convolutional layers in a CNN with the corresponding RIC-C, a RIC-CNN can be derived.
RIC-CNN achieves state-of-the-art classification on the rotated test set of MNIST.
arXiv Detail & Related papers (2022-11-21T19:27:02Z)
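
RIC-C itself rotates each convolution's sampling coordinates around its center; as a hedged stand-in, the sketch below shows a simpler, generic route to (90-degree) rotation invariance by averaging a classifier's logits over rotated copies of the input. It illustrates the invariance goal, not the paper's operator.

```python
import torch
import torch.nn as nn

def rot90_invariant_logits(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    # Average the model's logits over the four 90-degree rotations of the
    # input (assumed NCHW). A generic invariance-by-averaging baseline,
    # NOT RIC-C, which instead rotates each convolution's sampling grid.
    logits = [model(torch.rot90(x, k, dims=(2, 3))) for k in range(4)]
    return torch.stack(logits, dim=0).mean(dim=0)
```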
- Scalable Neural Video Representations with Learnable Positional Features [73.51591757726493]
We show how to train neural representations with learnable positional features (NVP) that effectively amortize a video as latent codes.
We demonstrate the superiority of NVP on the popular UVG benchmark; compared with prior art, NVP not only trains 2 times faster (in less than 5 minutes) but also exceeds prior encoding quality, improving PSNR from 34.07 to 34.57.
arXiv Detail & Related papers (2022-10-13T08:15:08Z)
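
In the spirit of amortizing a video as latent codes, here is a minimal sketch: a small learnable spatio-temporal feature volume is sampled at continuous (x, y, t) coordinates and decoded to RGB by an MLP. NVP's actual learnable positional features (factorized image-, video-, and temporal-wise grids) are richer; the shapes here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentVideoField(nn.Module):
    # Minimal sketch of "a video as latent codes": a learnable
    # spatio-temporal feature volume is sampled at continuous (x, y, t)
    # coordinates and decoded to RGB by a small MLP.
    def __init__(self, c=16, t=8, h=32, w=32):
        super().__init__()
        self.grid = nn.Parameter(0.01 * torch.randn(1, c, t, h, w))
        self.mlp = nn.Sequential(nn.Linear(c, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, coords):              # coords: (N, 3) in [-1, 1], (x, y, t)
        g = coords.view(1, -1, 1, 1, 3)     # grid_sample wants (N, D, H, W, 3)
        feat = F.grid_sample(self.grid, g, align_corners=True)  # (1, c, N, 1, 1)
        return self.mlp(feat.flatten(2).squeeze(0).t())         # (N, 3) RGB
```

Fitting the field to a video amounts to regressing sampled (x, y, t) → RGB pairs; the "representation" is then the grid plus the small MLP.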
- Towards a General Purpose CNN for Long Range Dependencies in $\mathrm{N}$D [49.57261544331683]
We propose a single CNN architecture equipped with continuous convolutional kernels for tasks on arbitrary resolution, dimensionality and length without structural changes.
We show the generality of our approach by applying the same CCNN to a wide set of tasks on sequential (1D) and visual (2D) data.
Our CCNN performs competitively and often outperforms the current state-of-the-art across all tasks considered.
arXiv Detail & Related papers (2022-06-07T15:48:02Z)
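
The key idea of continuous convolutional kernels can be sketched in a few lines: a small MLP maps relative positions to kernel values, so one parameterization serves any kernel size (and, with more coordinate inputs, any dimensionality). The kernel network below is a toy stand-in for the CCNN paper's more elaborate one.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousKernelConv1d(nn.Module):
    # Minimal sketch of a continuous convolutional kernel: an MLP maps
    # relative positions in [-1, 1] to kernel values, decoupling the
    # parameter count from the kernel size. Sizes are illustrative.
    def __init__(self, in_ch, out_ch, hidden=32):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        self.kernel_net = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(),
            nn.Linear(hidden, in_ch * out_ch))

    def forward(self, x, kernel_size=33):        # x: (B, in_ch, L)
        pos = torch.linspace(-1, 1, kernel_size, device=x.device).unsqueeze(-1)
        w = self.kernel_net(pos)                 # (K, in_ch*out_ch)
        w = w.t().view(self.out_ch, self.in_ch, kernel_size)
        return F.conv1d(x, w, padding=kernel_size // 2)
```

Because the kernel is a function of position rather than a fixed tensor, the same layer can be evaluated at any resolution or length, which is the property the paper exploits.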
- Rethinking Nearest Neighbors for Visual Classification [56.00783095670361]
k-NN is a lazy learning method that classifies a test image by aggregating its distances to the top-k nearest neighbors in a training set.
We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps.
Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration.
arXiv Detail & Related papers (2021-12-15T20:15:01Z)
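
A minimal sketch of the k-NN step over frozen pre-trained features (whether supervised or self-supervised), using cosine similarity and a majority vote; distance-weighted voting is a common variant.

```python
import torch
import torch.nn.functional as F

def knn_classify(test_feats, train_feats, train_labels, k=20, num_classes=1000):
    # Cosine-similarity k-NN over frozen pre-trained features:
    # find each test image's k nearest training neighbors and take
    # a majority vote over their labels.
    test = F.normalize(test_feats, dim=-1)
    train = F.normalize(train_feats, dim=-1)
    sims = test @ train.t()                        # (N_test, N_train)
    _, idx = sims.topk(k, dim=-1)                  # k nearest neighbors
    votes = train_labels[idx]                      # (N_test, k)
    one_hot = F.one_hot(votes, num_classes).sum(dim=1)
    return one_hot.argmax(dim=-1)                  # predicted class per image
```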
- Dynamic Gesture Recognition [0.0]
It is possible to use machine learning to classify images and/or videos instead of traditional computer vision algorithms.
The aim of this project is to build a symbiosis between a convolutional neural network (CNN) and a recurrent neural network (RNN).
arXiv Detail & Related papers (2021-09-20T09:45:29Z)
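
A minimal sketch of such a CNN/RNN symbiosis: a small CNN embeds each video frame, and an LSTM aggregates the frame embeddings over time before classification. The backbone and sizes are illustrative assumptions, not the project's actual networks.

```python
import torch
import torch.nn as nn

class CNNRNNGesture(nn.Module):
    # Per-frame CNN features -> LSTM over time -> gesture class.
    def __init__(self, num_classes=10, feat=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat))
        self.rnn = nn.LSTM(feat, 128, batch_first=True)
        self.head = nn.Linear(128, num_classes)

    def forward(self, clip):                     # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        f = self.cnn(clip.flatten(0, 1)).view(B, T, -1)  # per-frame features
        out, _ = self.rnn(f)                     # temporal aggregation
        return self.head(out[:, -1])             # classify from the last step
```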
- Recurrent Neural Network from Adder's Perspective: Carry-lookahead RNN [9.20540910698296]
We discuss the similarities between the recurrent neural network (RNN) and the serial adder.
Inspired by the carry-lookahead adder, we introduce a carry-lookahead module to the RNN, which makes it possible for the RNN to run in parallel.
arXiv Detail & Related papers (2021-06-22T12:28:33Z)
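
The carry-lookahead analogy can be made concrete with a standard parallel prefix scan (not the paper's specific module): an elementwise linear recurrence h_t = a_t * h_{t-1} + b_t, the RNN analogue of carry propagation, composes associatively, so all T states can be computed in O(log T) parallel steps rather than with a serial loop.

```python
import torch

def parallel_linear_rnn(a, b):
    # Computes h_t = a_t * h_{t-1} + b_t (h_0 = 0) for all t with a
    # Hillis-Steele inclusive scan: the (a, b) pairs compose associatively,
    # like carries in a carry-lookahead adder, giving O(log T) depth.
    A, B = a.clone(), b.clone()
    step, T = 1, a.shape[0]
    while step < T:
        prev_A = torch.ones_like(A)    # identity element (a=1, b=0)
        prev_B = torch.zeros_like(B)
        prev_A[step:] = A[:-step]      # prefix ending `step` positions earlier
        prev_B[step:] = B[:-step]
        A, B = A * prev_A, A * prev_B + B
        step *= 2
    return B                           # B[t] == h_t

# sanity check against the serial recurrence
a, b = torch.rand(7), torch.rand(7)
h, hs = torch.tensor(0.0), []
for t in range(7):
    h = a[t] * h + b[t]
    hs.append(h)
assert torch.allclose(parallel_linear_rnn(a, b), torch.stack(hs))
```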
- Multichannel CNN with Attention for Text Classification [5.1545224296246275]
This paper proposes an Attention-based Multichannel Convolutional Neural Network (AMCNN) for text classification.
AMCNN uses a bi-directional long short-term memory (BiLSTM) to encode the history and future information of words into high-dimensional representations.
The experimental results on the benchmark datasets demonstrate that AMCNN achieves better performance than state-of-the-art methods.
arXiv Detail & Related papers (2020-06-29T16:37:51Z)
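
A hedged sketch in the spirit of AMCNN: a BiLSTM encodes each word with history/future context, a simple learned attention re-weights the sequence, and the raw and attended encodings are stacked as two channels for a CNN classifier. The paper's channel construction and attention mechanism differ in detail.

```python
import torch
import torch.nn as nn

class AttentiveMultichannelCNN(nn.Module):
    # BiLSTM context encoding, learned attention weights, and two
    # "channels" (raw vs. attended encodings) fed to a CNN classifier.
    # All sizes are illustrative assumptions.
    def __init__(self, vocab_size, emb=128, hidden=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.conv = nn.Conv2d(2, 64, kernel_size=3, padding=1)
        self.head = nn.Linear(64, num_classes)

    def forward(self, tokens):                    # tokens: (B, L) word ids
        h, _ = self.bilstm(self.embed(tokens))    # (B, L, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)    # (B, L, 1) attention over words
        chans = torch.stack([h, w * h], dim=1)    # (B, 2, L, 2*hidden)
        feat = self.conv(chans).amax(dim=(2, 3))  # conv, then global max-pool
        return self.head(feat)                    # (B, num_classes)
```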
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates for this type of CNN on the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
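
For context, the classical minimax benchmark such estimation bounds target: for regression over the β-Hölder class on [0,1]^d, no estimator can improve on the rate below, which the paper shows ResNet-type CNNs attain (this is the standard rate, stated here without the paper's exact constants or logarithmic factors).

```latex
\inf_{\hat f_n}\; \sup_{f \in \mathcal{H}^{\beta}([0,1]^d)}
  \mathbb{E}\,\lVert \hat f_n - f \rVert_{L^2}^2
  \;\asymp\; n^{-\frac{2\beta}{2\beta + d}}
```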
This list is automatically generated from the titles and abstracts of the papers on this site.