Collaborative Method for Incremental Learning on Classification and Generation
- URL: http://arxiv.org/abs/2010.15378v1
- Date: Thu, 29 Oct 2020 06:34:53 GMT
- Title: Collaborative Method for Incremental Learning on Classification and Generation
- Authors: Byungju Kim, Jaeyoung Lee, Kyungsu Kim, Sungjin Kim and Junmo Kim
- Abstract summary: We introduce a novel algorithm, Incremental Class Learning with Attribute Sharing (ICLAS), for incremental class learning with deep neural networks.
One of its components, incGAN, is a generative model that can produce images with greater variety than the training data.
Under the challenging condition of data deficiency, ICLAS incrementally trains both the classification and generation networks.
- Score: 32.07222897378187
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although well-trained deep neural networks have shown remarkable
performance on numerous tasks, they rapidly forget what they have learned as
soon as they begin learning from additional data while the previous data are
no longer provided. In this paper, we introduce a novel algorithm, Incremental
Class Learning with Attribute Sharing (ICLAS), for incremental class learning
with deep neural networks. As one of its components, we also introduce a
generative model, incGAN, which can generate images with greater variety than
the training data. Under the challenging condition of data deficiency, ICLAS
incrementally trains the classification and generation networks. Since ICLAS
trains both networks, our algorithm can perform multiple rounds of incremental
class learning. Experiments on the MNIST dataset demonstrate the advantages of
our algorithm.
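For intuition, the following is a minimal sketch, in PyTorch, of the generative-replay idea underlying ICLAS; it is not the authors' implementation. When new classes arrive, images for the previously learned classes are sampled from a class-conditional generator (a hypothetical stand-in for incGAN, here called Replayer) and mixed with the new data, so the classifier keeps seeing old classes even though their original data are no longer provided. The architectures, names, and hyperparameters below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    """Small MLP classifier for 28x28 grayscale images (MNIST-sized)."""
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256),
                                 nn.ReLU(), nn.Linear(256, num_classes))

    def forward(self, x):
        return self.net(x)

class Replayer(nn.Module):
    """Hypothetical class-conditional generator standing in for incGAN."""
    def __init__(self, num_classes, latent_dim=64):
        super().__init__()
        self.latent_dim = latent_dim
        self.embed = nn.Embedding(num_classes, latent_dim)
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 28 * 28), nn.Tanh())

    def forward(self, labels):
        # Condition random noise on the class embedding.
        z = torch.randn(labels.size(0), self.latent_dim) * self.embed(labels)
        return self.net(z).view(-1, 1, 28, 28)

def incremental_step(classifier, replayer, new_images, new_labels,
                     old_classes, replay_per_class=64, steps=100, lr=1e-3):
    """One increment: real data for the new class(es) plus generated replay
    for all previously learned classes (assumes old_classes is non-empty)."""
    opt = torch.optim.Adam(classifier.parameters(), lr=lr)
    for _ in range(steps):
        # Sample replay images for every class learned so far.
        replay_labels = torch.tensor(old_classes).repeat_interleave(replay_per_class)
        with torch.no_grad():
            replay_images = replayer(replay_labels)
        images = torch.cat([new_images, replay_images])
        labels = torch.cat([new_labels, replay_labels])
        loss = F.cross_entropy(classifier(images), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return classifier
```

In ICLAS itself the generation network is also trained at every increment, which is what enables multiple rounds of incremental class learning; the sketch above omits the generator's own incremental training for brevity.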
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- SA-CNN: Application to text categorization issues using simulated annealing-based convolutional neural network optimization [0.0]
Convolutional neural networks (CNNs) are a representative class of deep learning algorithms.
We introduce SA-CNN neural networks for text classification tasks based on Text-CNN neural networks.
arXiv Detail & Related papers (2023-03-13T14:27:34Z)
- How to Train Your Neural Network: A Comparative Evaluation [1.3654846342364304]
We discuss and compare current state-of-the-art frameworks for large-scale distributed deep learning.
We present empirical results comparing their performance on large image and language training tasks.
Based on our results, we discuss the algorithmic and implementation aspects of each framework that hinder performance.
arXiv Detail & Related papers (2021-11-09T04:24:42Z)
- Train your classifier first: Cascade Neural Networks Training from upper layers to lower layers [54.47911829539919]
We develop a novel top-down training method which can be viewed as an algorithm for searching for high-quality classifiers.
We tested this method on automatic speech recognition (ASR) tasks and language modelling tasks.
The proposed method consistently improves recurrent neural network ASR models on Wall Street Journal, self-attention ASR models on Switchboard, and AWD-LSTM language models on WikiText-2.
arXiv Detail & Related papers (2021-02-09T08:19:49Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve the performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- Dataset Meta-Learning from Kernel Ridge-Regression [18.253682891579402]
Kernel Inducing Points (KIP) can compress datasets by one or two orders of magnitude.
KIP-learned datasets are transferable to the training of finite-width neural networks even beyond the lazy-training regime.
arXiv Detail & Related papers (2020-10-30T18:54:04Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
- Online Sequential Extreme Learning Machines: Features Combined From Hundreds of Midlayers [0.0]
In this paper, we develop an algorithm called the hierarchical online sequential learning algorithm (H-OS-ELM).
The algorithm can learn chunk by chunk with fixed or varying block size.
arXiv Detail & Related papers (2020-06-12T00:50:04Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters; a rough sketch of this idea appears after this list.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
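As referenced in the Curriculum By Smoothing entry above, the following is a rough sketch, under stated assumptions, of how feature-map smoothing with an annealed Gaussian low-pass filter can be implemented in PyTorch. It is not the paper's code; the kernel size and the annealing schedule are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(channels, kernel_size=5, sigma=1.0):
    """Build a depthwise Gaussian blur kernel of shape (C, 1, k, k)."""
    coords = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
    g1d = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g1d = g1d / g1d.sum()
    g2d = torch.outer(g1d, g1d)
    return g2d.expand(channels, 1, kernel_size, kernel_size).clone()

class SmoothedConvBlock(nn.Module):
    """Conv -> ReLU -> annealed Gaussian smoothing of the feature maps."""
    def __init__(self, in_ch, out_ch, kernel_size=5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.kernel_size = kernel_size
        self.sigma = 1.0  # updated externally by the training loop

    def forward(self, x):
        x = F.relu(self.conv(x))
        if self.sigma > 1e-3:  # skip the blur once sigma has decayed to ~0
            k = gaussian_kernel(x.size(1), self.kernel_size, self.sigma).to(x.device)
            x = F.conv2d(x, k, padding=self.kernel_size // 2, groups=x.size(1))
        return x

# Example annealing schedule (an assumption, not the paper's): decay sigma per epoch.
# for epoch in range(num_epochs):
#     block.sigma = max(0.0, 1.0 * (1 - epoch / num_epochs))
```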
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.