Pseudo Rehearsal using non photo-realistic images
- URL: http://arxiv.org/abs/2004.13414v1
- Date: Tue, 28 Apr 2020 10:44:57 GMT
- Title: Pseudo Rehearsal using non photo-realistic images
- Authors: Bhasker Sri Harsha Suri, Kalidas Yeturu
- Abstract summary: Deep neural networks forget previously learnt tasks when they are faced with learning new tasks.
Rehearsing the neural network with the training data of the previous task can protect the network from catastrophic forgetting.
In an image classification setting, while current techniques try to generate synthetic data that is photo-realistic, we demonstrate that neural networks can be rehearsed on data that is not photo-realistic and still achieve good retention of the previous task.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks forget previously learnt tasks when they are faced with learning new tasks. This is called catastrophic forgetting. Rehearsing the neural network with the training data of the previous task can protect the network from catastrophic forgetting. Since rehearsing requires storing the entire previous dataset, pseudo-rehearsal was proposed, in which samples belonging to the previous data are generated synthetically for rehearsal. In an image classification setting, while current techniques try to generate synthetic data that is photo-realistic, we demonstrate that neural networks can be rehearsed on data that is not photo-realistic and still achieve good retention of the previous task. We also demonstrate that forgoing the constraint of photo-realism in the generated data can significantly reduce the computational and memory resources consumed by pseudo-rehearsal.
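No code accompanies the abstract; the following is a minimal, hypothetical PyTorch sketch of the pseudo-rehearsal loop it describes. The noise-based generator `make_pseudo_samples`, the models, and every hyperparameter are illustrative assumptions rather than the authors' implementation; the paper's point is precisely that such non photo-realistic pseudo-samples can suffice.

```python
# Minimal pseudo-rehearsal sketch (PyTorch). All names and hyperparameters
# are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def make_pseudo_samples(old_model, n, shape=(1, 28, 28)):
    """Hypothetical stand-in for a pseudo-sample generator: random
    (non photo-realistic) inputs, labelled by the frozen old model."""
    x = torch.rand(n, *shape)            # noise images, no photo-realism
    with torch.no_grad():
        y = old_model(x).argmax(dim=1)   # the old network supplies the labels
    return x, y

def train_with_rehearsal(model, old_model, new_loader, epochs=5, lam=1.0):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x_new, y_new in new_loader:
            # mix pseudo-samples of the old task into every new-task batch
            x_old, y_old = make_pseudo_samples(old_model, len(x_new))
            loss = F.cross_entropy(model(x_new), y_new) \
                 + lam * F.cross_entropy(model(x_old), y_old)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

Labelling cheap random inputs with a frozen copy of the old network is what removes the need to store the previous dataset, which is where the claimed memory savings come from.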
Related papers
- Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation [110.61853418925219]
We build a stronger version of the dataset reconstruction attack and show how it can provably recover the entire training set in the infinite-width regime.
We show, both theoretically and empirically, that reconstructed images tend to be "outliers" in the dataset.
These reconstruction attacks can be used for dataset distillation; that is, we can retrain on reconstructed images and obtain high predictive accuracy.
arXiv Detail & Related papers (2023-02-02T21:41:59Z)
- Reconstructing Training Data from Trained Neural Networks [42.60217236418818]
We show that, in some cases, a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier.
We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods.
arXiv Detail & Related papers (2022-06-15T18:35:16Z)
- Neural Maximum A Posteriori Estimation on Unpaired Data for Motion Deblurring [87.97330195531029]
We propose a Neural Maximum A Posteriori (NeurMAP) estimation framework for training neural networks to recover blind motion information and sharp content from unpaired data.
The proposed NeurMAP can be applied to existing deblurring neural networks, and is the first framework that enables training image deblurring networks on unpaired datasets.
arXiv Detail & Related papers (2022-04-26T08:09:47Z)
- Task2Sim: Towards Effective Pre-training and Transfer from Synthetic Data [74.66568380558172]
We study how well models pre-trained on synthetic data generated by graphics simulators transfer to downstream tasks.
We introduce Task2Sim, a unified model mapping downstream task representations to optimal simulation parameters.
It learns this mapping by training to find the best simulation parameters for a set of "seen" tasks.
Once trained, it can then be used to predict the best simulation parameters for novel "unseen" tasks in one shot.
arXiv Detail & Related papers (2021-11-30T19:25:27Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
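For background, deep image prior fits an untrained network to a single corrupted image, using the architecture itself as the regularizer; the slowness addressed above comes from running this optimization anew for every image. A minimal denoising sketch follows, with an assumed toy network standing in for the usual U-Net and an assumed iteration budget:

```python
# Minimal deep-image-prior denoising sketch (PyTorch). The tiny network
# and the iteration budget are illustrative assumptions.
import torch
import torch.nn as nn

def dip_denoise(noisy, iters=1500, lr=1e-2):
    # noisy: (1, C, H, W) tensor; z is a fixed random input code
    z = torch.randn_like(noisy)
    net = nn.Sequential(                  # toy stand-in for the usual U-Net
        nn.Conv2d(noisy.shape[1], 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, noisy.shape[1], 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):                # stopping early acts as the prior:
        out = net(z)                      # the net fits structure before noise
        loss = ((out - noisy) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net(z).detach()
```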
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Targeted Gradient Descent: A Novel Method for Convolutional Neural Networks Fine-tuning and Online-learning [9.011106198253053]
A convolutional neural network (ConvNet) is usually trained and then tested using images drawn from the same distribution.
To generalize a ConvNet to various tasks often requires a complete training dataset that consists of images drawn from different tasks.
We present Targeted Gradient Descent (TGD), a novel fine-tuning method that can extend a pre-trained network to a new task without revisiting data from the previous task.
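The selection rule TGD uses to decide which kernels may be updated is specific to the paper and is not reproduced here; as a generic, assumed illustration of the underlying idea, fine-tuning can be restricted by masking gradients so that parameters protected for the previous task never move:

```python
# Generic gradient-masking sketch (PyTorch). The binary masks marking which
# parameters may change are assumed given; TGD's actual selection criterion
# is defined in the paper.
import torch
import torch.nn.functional as F

def masked_finetune_step(model, masks, x, y, opt):
    """masks: dict mapping parameter name -> 0/1 tensor of the same shape;
    1 = free to update for the new task, 0 = frozen to protect the old task."""
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None and name in masks:
                p.grad *= masks[name]    # zero out protected directions
    opt.step()
    return loss.item()
```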
arXiv Detail & Related papers (2021-09-29T21:22:09Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
- Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
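A minimal sketch of the recovery procedure this line of work studies: keep the measurement operator fixed and fit the weights of an un-trained network so that its output matches the random measurements. The decoder architecture, latent code, and sizes below are assumptions for illustration:

```python
# Compressive sensing with an un-trained network (PyTorch sketch):
# fit network weights so the measured output matches y ~= A @ x_true.
# The tiny decoder and all sizes are illustrative assumptions.
import torch
import torch.nn as nn

def recover(y, A, img_shape, iters=2000, lr=1e-2):
    # y: (m,) random measurements; A: (m, n) known measurement matrix
    z = torch.randn(1, 64)                     # fixed latent code
    n = int(torch.tensor(img_shape).prod())
    net = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, n))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        x_hat = net(z).view(-1)                # candidate image, flattened
        loss = ((A @ x_hat - y) ** 2).mean()   # match the measurements only
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net(z).detach().view(img_shape)
```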
arXiv Detail & Related papers (2020-05-07T15:57:25Z)
- Towards Deep Unsupervised SAR Despeckling with Blind-Spot Convolutional Neural Networks [30.410981386006394]
Deep learning techniques have outperformed classical model-based despeckling algorithms.
In this paper, we propose a self-supervised Bayesian despeckling method.
We show that the performance of the proposed network is very close to that of the supervised training approach on synthetic data, and competitive on real data.
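As background on the blind-spot ingredient, a Noise2Void-style masked training step is sketched below; the paper's Bayesian speckle model and loss are not reproduced, so this is only an assumed illustration of how a network can be trained on noisy images alone:

```python
# Blind-spot training sketch (PyTorch), Noise2Void-style masking: a few
# pixels are replaced by values from random other locations and the network
# is trained to predict their original values, so it can never learn the
# identity map. The masking rate and MSE loss are illustrative assumptions.
import torch

def blind_spot_step(model, img, opt, n_mask=64):
    # img: (1, 1, H, W) noisy SAR intensity image
    _, _, H, W = img.shape
    ys = torch.randint(0, H, (n_mask,))
    xs = torch.randint(0, W, (n_mask,))
    corrupted = img.clone()
    # replace the masked pixels with values drawn from random other pixels
    corrupted[0, 0, ys, xs] = img[0, 0,
                                  torch.randint(0, H, (n_mask,)),
                                  torch.randint(0, W, (n_mask,))]
    pred = model(corrupted)
    # supervise only at the masked (blind-spot) locations
    loss = ((pred[0, 0, ys, xs] - img[0, 0, ys, xs]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```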
arXiv Detail & Related papers (2020-01-15T12:21:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.