Deep Collective Learning: Learning Optimal Inputs and Weights Jointly in
Deep Neural Networks
- URL: http://arxiv.org/abs/2009.07988v1
- Date: Thu, 17 Sep 2020 00:33:04 GMT
- Title: Deep Collective Learning: Learning Optimal Inputs and Weights Jointly in
Deep Neural Networks
- Authors: Xiang Deng and Zhongfei (Mark) Zhang
- Abstract summary: In deep learning and computer vision literature, visual data are always represented in a manually designed coding scheme.
We boldly question whether the manually designed inputs are good for DNN training for different tasks.
We propose the paradigm of deep collective learning, which aims to learn the weights of DNNs and the inputs to DNNs simultaneously for given tasks.
- Score: 5.6592403195043826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is well observed that in deep learning and computer vision literature,
visual data are always represented in a manually designed coding scheme (e.g.,
RGB images are represented as integers ranging from 0 to 255 for each channel)
when they are input to an end-to-end deep neural network (DNN) for any learning
task. We boldly question whether the manually designed inputs are good for DNN
training for different tasks and study whether the input to a DNN can be
optimally learned end-to-end together with learning the weights of the DNN. In
this paper, we propose the paradigm of {\em deep collective learning} which
aims to learn the weights of DNNs and the inputs to DNNs simultaneously for
given tasks. We note that collective learning has been implicitly but widely
used in natural language processing while it has almost never been studied in
computer vision. Consequently, we propose the lookup vision networks
(Lookup-VNets) as a solution to deep collective learning in computer vision.
This is achieved by associating each color in each channel with a vector in
lookup tables. As learning inputs in computer vision has almost never been
studied in the existing literature, we explore several aspects of this question
through varieties of experiments on image classification tasks. Experimental
results on four benchmark datasets, i.e., CIFAR-10, CIFAR-100, Tiny ImageNet,
and ImageNet (ILSVRC2012) have shown several surprising characteristics of
Lookup-VNets and have demonstrated the advantages and promise of Lookup-VNets
and deep collective learning.
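As a concrete illustration of the lookup-table mechanism described above, the following PyTorch sketch associates each of the 256 intensity values in each channel with a learnable vector and trains those vectors jointly with the network weights. The embedding dimension, per-channel table layout, and toy classifier are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LookupInput(nn.Module):
    """Map integer pixel values (0-255) in each channel to learned vectors."""
    def __init__(self, channels=3, dim=4):
        super().__init__()
        # One 256-entry lookup table per color channel (an assumption;
        # the paper's exact table layout may differ).
        self.tables = nn.ModuleList(
            nn.Embedding(256, dim) for _ in range(channels)
        )

    def forward(self, x):  # x: (B, C, H, W) integer tensor in [0, 255]
        # Look up each channel's pixels and stack the learned vectors
        # along the channel axis: output is (B, C * dim, H, W).
        outs = []
        for c, table in enumerate(self.tables):
            e = table(x[:, c])                  # (B, H, W, dim)
            outs.append(e.permute(0, 3, 1, 2))  # (B, dim, H, W)
        return torch.cat(outs, dim=1)

# Joint training: the lookup tables and the network weights share one optimizer.
net = nn.Sequential(
    LookupInput(channels=3, dim=4),
    nn.Conv2d(12, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)
opt = torch.optim.SGD(net.parameters(), lr=0.1)

x = torch.randint(0, 256, (8, 3, 32, 32))  # a batch of raw integer images
y = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(net(x), y)
loss.backward()
opt.step()  # updates both the lookup tables and the conv/linear weights
```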
Related papers
- Active Learning on Neural Networks through Interactive Generation of
Digit Patterns and Visual Representation [9.127485315153312]
An interactive learning system is designed to create digit patterns and recognize them in real time.
An evaluation with multiple datasets is conducted to determine its usability for active learning.
arXiv Detail & Related papers (2023-10-02T19:21:24Z)
- Deep Neural Networks in Video Human Action Recognition: A Review [21.00217656391331]
Video behavior recognition is one of the most foundational tasks of computer vision.
Deep neural networks are built to recognize pixel-level information from inputs in RGB, RGB-D, or optical flow formats.
Our article finds that deep neural networks surpass most existing techniques in feature learning and extraction tasks.
arXiv Detail & Related papers (2023-05-25T03:54:41Z)
- A large scale multi-view RGBD visual affordance learning dataset [4.3773754388936625]
We introduce a large scale multi-view RGBD visual affordance learning dataset.
To our knowledge, it is the first and largest multi-view RGBD visual affordance learning dataset.
Several state-of-the-art deep learning networks are each evaluated on affordance recognition and segmentation tasks.
arXiv Detail & Related papers (2022-03-26T14:31:35Z)
- Exploring the Common Principal Subspace of Deep Features in Neural Networks [50.37178960258464]
We find that different Deep Neural Networks (DNNs) trained with the same dataset share a common principal subspace in latent spaces.
Specifically, we design a new metric, the $\mathcal{P}$-vector, to represent the principal subspace of deep features learned in a DNN.
Small angles (with cosine close to $1.0$) have been found in the comparisons between any two DNNs trained with different algorithms/architectures.
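A minimal sketch of the idea, assuming the $\mathcal{P}$-vector is approximated by the leading principal direction of a feature matrix (the paper's exact construction may differ):

```python
import numpy as np

def p_vector(features):
    """Top principal direction of a feature matrix (N samples x D dims).

    A stand-in for the paper's P-vector: the exact construction may
    differ, but the idea is to summarize the principal subspace of
    deep features as a single vector.
    """
    centered = features - features.mean(axis=0)
    # SVD of the centered features; the first right-singular vector
    # spans the leading principal direction.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

# Features extracted from two independently trained DNNs on the same
# dataset (random stand-ins here).
rng = np.random.default_rng(0)
shared = rng.normal(size=(1000, 64))
feats_a = shared + 0.1 * rng.normal(size=(1000, 64))
feats_b = shared + 0.1 * rng.normal(size=(1000, 64))

va, vb = p_vector(feats_a), p_vector(feats_b)
# Cosine close to 1.0 (in absolute value; singular vectors carry a sign
# ambiguity) indicates a shared principal subspace.
cos = abs(va @ vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
print(f"cosine similarity: {cos:.3f}")
```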
arXiv Detail & Related papers (2021-10-06T15:48:32Z)
- Graph Neural Networks for Natural Language Processing: A Survey [64.36633422999905]
We present a comprehensive overview of Graph Neural Networks (GNNs) for Natural Language Processing.
We propose a new taxonomy of GNNs for NLP, which organizes existing research of GNNs for NLP along three axes: graph construction, graph representation learning, and graph-based encoder-decoder models.
arXiv Detail & Related papers (2021-06-10T23:59:26Z)
- What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space [88.37185513453758]
We propose a method to visualize and understand the class-wise knowledge learned by deep neural networks (DNNs) under different settings.
Our method searches for a single predictive pattern in the pixel space to represent the knowledge learned by the model for each class.
In the adversarial setting, we show that adversarially trained models tend to learn more simplified shape patterns.
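A generic sketch of such an input-space pattern search, using plain gradient ascent on a class logit; the paper's actual procedure (initialization, regularization, masking) is not detailed here, so the function name and hyperparameters below are assumptions:

```python
import torch

def class_pattern(model, target_class, shape=(1, 3, 32, 32), steps=200, lr=0.1):
    """Gradient-ascent search for an input-space pattern that the model
    strongly associates with one class.

    A generic sketch of input-space pattern search; the paper's actual
    method may add regularizers or masks.
    """
    pattern = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)
    model.eval()
    for _ in range(steps):
        opt.zero_grad()
        logits = model(pattern)
        # Maximize the target-class logit (minimize its negative).
        loss = -logits[0, target_class]
        loss.backward()
        opt.step()
        # Keep the pattern in a valid input range.
        with torch.no_grad():
            pattern.clamp_(0.0, 1.0)
    return pattern.detach()
```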
arXiv Detail & Related papers (2021-01-18T06:38:41Z)
- A Framework for Fast Scalable BNN Inference using Googlenet and Transfer Learning [0.0]
This thesis aims to achieve high accuracy in object detection with good real-time performance.
The binarized neural network has shown high performance in various vision tasks such as image classification, object detection, and semantic segmentation.
Results show that the transfer learning method detects objects with higher accuracy than existing methods.
arXiv Detail & Related papers (2021-01-04T06:16:52Z)
- A Practical Tutorial on Graph Neural Networks [49.919443059032226]
Graph neural networks (GNNs) have recently grown in popularity in the field of artificial intelligence (AI).
This tutorial exposes the power and novelty of GNNs to AI practitioners.
arXiv Detail & Related papers (2020-10-11T12:36:17Z)
- Applications of Deep Neural Networks with Keras [0.0]
Deep learning allows a neural network to learn hierarchies of information in a way that resembles the function of the human brain.
This course will introduce the student to classic neural network structures: Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) networks, Gated Recurrent Units (GRU), and Generative Adversarial Networks (GAN).
arXiv Detail & Related papers (2020-09-11T22:09:10Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- $\Pi-$nets: Deep Polynomial Neural Networks [86.36557534288535]
$\Pi$-Nets are neural networks in which the output is a high-order polynomial of the input.
We empirically demonstrate that $\Pi$-Nets have better representation power than standard DCNNs.
Our framework elucidates why recent generative models, such as StyleGAN, improve upon their predecessors.
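A hedged sketch of a polynomial network in this spirit: each step Hadamard-multiplies a fresh linear map of the input into the running representation, so the output is a high-order polynomial of the input. This is a generic factorization for illustration, not necessarily the exact $\Pi$-net parameterization from the paper.

```python
import torch
import torch.nn as nn

class PolyNet(nn.Module):
    """Degree-N polynomial of the input via repeated Hadamard products.

    Each step multiplies (element-wise) a fresh linear map of the input
    into the running representation, raising the polynomial degree by
    one; the additive skip keeps the lower-degree terms.
    """
    def __init__(self, in_dim, hidden, out_dim, degree=3):
        super().__init__()
        self.maps = nn.ModuleList(
            nn.Linear(in_dim, hidden) for _ in range(degree)
        )
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, z):
        x = self.maps[0](z)          # degree-1 term in z
        for lin in self.maps[1:]:
            x = lin(z) * x + x       # degree rises by one each step
        return self.head(x)

net = PolyNet(in_dim=16, hidden=32, out_dim=10, degree=3)
print(net(torch.randn(4, 16)).shape)  # torch.Size([4, 10])
```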
arXiv Detail & Related papers (2020-03-08T18:48:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.