DreamTeacher: Pretraining Image Backbones with Deep Generative Models
- URL: http://arxiv.org/abs/2307.07487v1
- Date: Fri, 14 Jul 2023 17:17:17 GMT
- Title: DreamTeacher: Pretraining Image Backbones with Deep Generative Models
- Authors: Daiqing Li, Huan Ling, Amlan Kar, David Acuna, Seung Wook Kim, Karsten
Kreis, Antonio Torralba, Sanja Fidler
- Abstract summary: We introduce a self-supervised feature representation learning framework that utilizes generative networks for pre-training downstream image backbones.
We investigate two types of knowledge distillation: 1) distilling learned generative features onto target image backbones as an alternative to pretraining these backbones on large labeled datasets such as ImageNet, and 2) distilling labels obtained from generative networks with task heads onto the logits of target backbones.
We empirically find that our DreamTeacher significantly outperforms existing self-supervised representation learning approaches across the board.
- Score: 103.62397699392346
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we introduce a self-supervised feature representation learning
framework DreamTeacher that utilizes generative networks for pre-training
downstream image backbones. We propose to distill knowledge from a trained
generative model into standard image backbones that have been well engineered
for specific perception tasks. We investigate two types of knowledge
distillation: 1) distilling learned generative features onto target image
backbones as an alternative to pretraining these backbones on large labeled
datasets such as ImageNet, and 2) distilling labels obtained from generative
networks with task heads onto logits of target backbones. We perform extensive
analyses on multiple generative models, dense prediction benchmarks, and
several pre-training regimes. We empirically find that our DreamTeacher
significantly outperforms existing self-supervised representation learning
approaches across the board. Unsupervised ImageNet pre-training with
DreamTeacher leads to significant improvements over ImageNet classification
pre-training on downstream datasets, showcasing generative models, and
diffusion generative models specifically, as a promising approach to
representation learning on large, diverse datasets without requiring manual
annotation.
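To make the first distillation mode concrete, here is a minimal sketch of feature distillation: a small regressor maps an image backbone's feature maps onto features extracted from a frozen generative model. The module, projection layer, and loss below are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of feature distillation: regress a backbone's feature
# maps onto features taken from a frozen generative model. Names and
# shapes are illustrative; the paper's actual regressors may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistiller(nn.Module):
    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        # 1x1 conv projects student features into the teacher's channel space.
        self.proj = nn.Conv2d(student_dim, teacher_dim, kernel_size=1)

    def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
        pred = self.proj(student_feat)
        # Match spatial resolution before regression.
        if pred.shape[-2:] != teacher_feat.shape[-2:]:
            pred = F.interpolate(pred, size=teacher_feat.shape[-2:],
                                 mode="bilinear", align_corners=False)
        # MSE regression onto the (frozen) generative features.
        return F.mse_loss(pred, teacher_feat.detach())

# Usage: student_feat from the image backbone, teacher_feat from the
# generative model's intermediate activations on the same image.
distiller = FeatureDistiller(student_dim=256, teacher_dim=512)
loss = distiller(torch.randn(2, 256, 32, 32), torch.randn(2, 512, 32, 32))
```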
Related papers
- Premonition: Using Generative Models to Preempt Future Data Changes in
Continual Learning [63.850451635362425]
Continual learning requires a model to adapt to ongoing changes in the data distribution.
We show that the combination of a large language model and an image generation model can provide useful premonitions of future data changes.
We find that the backbone of our pre-trained networks can learn representations useful for the downstream continual learning problem.
arXiv Detail & Related papers (2024-03-12T06:29:54Z)
- Machine Unlearning for Image-to-Image Generative Models [18.952634119351465]
This paper provides a unifying framework for machine unlearning for image-to-image generative models.
We propose a computationally efficient algorithm, underpinned by rigorous theoretical analysis, that demonstrates negligible performance degradation on the retain samples.
Empirical studies on two large-scale datasets, ImageNet-1K and Places-365, further show that our algorithm does not rely on the availability of the retain samples.
arXiv Detail & Related papers (2024-02-01T05:35:25Z)
- Distilling Knowledge from Self-Supervised Teacher by Embedding Graph Alignment [52.704331909850026]
We formulate a new knowledge distillation framework to transfer the knowledge from self-supervised pre-trained models to any other student network.
Inspired by the spirit of instance discrimination in self-supervised learning, we model the instance-instance relations by a graph formulation in the feature embedding space.
Our distillation scheme can be flexibly applied to transfer the self-supervised knowledge to enhance representation learning on various student networks.
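As a hedged sketch of the instance-relation idea (assumed details; the paper's graph construction may differ), one can build pairwise cosine-similarity graphs over a batch of teacher and student embeddings and penalize their mismatch:

```python
# Sketch of embedding-graph alignment: compare batch-level similarity
# graphs of teacher and student embeddings. Illustrative only, not the
# paper's exact formulation.
import torch
import torch.nn.functional as F

def graph_alignment_loss(student_emb: torch.Tensor, teacher_emb: torch.Tensor) -> torch.Tensor:
    # Normalize so graph edges are cosine similarities between instances.
    s = F.normalize(student_emb, dim=1)
    t = F.normalize(teacher_emb, dim=1)
    graph_s = s @ s.t()  # (B, B) student instance-instance graph
    graph_t = t @ t.t()  # (B, B) teacher instance-instance graph
    # Align the two relation graphs with an MSE penalty.
    return F.mse_loss(graph_s, graph_t.detach())

loss = graph_alignment_loss(torch.randn(16, 128), torch.randn(16, 256))
```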
arXiv Detail & Related papers (2022-11-23T19:27:48Z)
- Semantic-Aware Generation for Self-Supervised Visual Representation Learning [116.5814634936371]
We advocate for Semantic-aware Generation (SaGe), which encourages richer semantics, rather than low-level details, to be preserved in the generated image.
SaGe complements the target network with view-specific features and thus alleviates the semantic degradation brought by intensive data augmentations.
We execute SaGe on ImageNet-1K and evaluate the pre-trained models on five downstream tasks including nearest neighbor test, linear classification, and fine-scaled image recognition.
arXiv Detail & Related papers (2021-11-25T16:46:13Z)
- Towards Learning a Vocabulary of Visual Concepts and Operators using Deep Neural Networks [0.0]
We analyze the learned feature maps of models trained on MNIST images to achieve more explainable predictions.
We illustrate the idea by generating visual concepts from a Variational Autoencoder trained using MNIST images.
We were able to reduce the reconstruction loss (mean square error) from an initial value of 120 without augmentation to 60 with augmentation.
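To make the quoted reconstruction-loss figures concrete, below is a minimal VAE sketch of the kind described; the architecture, latent size, and loss weighting are assumptions for illustration only.

```python
# Minimal MNIST-style VAE sketch: per-image reconstruction MSE summed
# over pixels, as in the figures quoted above. Dimensions are assumed,
# not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.enc = nn.Linear(784, 2 * latent_dim)  # mean and log-variance
        self.dec = nn.Linear(latent_dim, 784)

    def forward(self, x: torch.Tensor):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = torch.sigmoid(self.dec(z))
        # Reconstruction term: summed squared error, averaged per image.
        recon_loss = F.mse_loss(recon, x, reduction="sum") / x.size(0)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
        return recon_loss + kl, recon_loss

model = TinyVAE()
loss, mse = model(torch.rand(8, 784))
```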
arXiv Detail & Related papers (2021-09-01T16:34:57Z)
- Automated Cleanup of the ImageNet Dataset by Model Consensus, Explainability and Confident Learning [0.0]
ImageNet has been the backbone of various convolutional neural networks (CNNs) trained on its ILSVRC12 subset.
This paper describes automated applications based on model consensus, explainability and confident learning to correct labeling mistakes.
ImageNet-Clean improves model performance by 2-2.4% for SqueezeNet and EfficientNet-B0 models.
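A hedged sketch of the model-consensus idea: flag a label as suspect when an ensemble of independently trained models unanimously disagrees with it while agreeing among themselves. The ensemble and decision rule here are illustrative assumptions, not the paper's pipeline.

```python
# Sketch of label cleanup by model consensus. Illustrative decision rule:
# suspect = every model disagrees with the label AND all models agree
# with each other.
import torch

@torch.no_grad()
def flag_suspect_labels(models, images: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    votes = torch.stack([m(images).argmax(dim=1) for m in models])  # (M, B)
    disagree = votes != labels.unsqueeze(0)      # each model vs. the dataset label
    unanimous = (votes == votes[0]).all(dim=0)   # models agree among themselves
    return disagree.all(dim=0) & unanimous       # (B,) boolean mask of suspects

# Usage with stand-in models on MNIST-sized inputs:
models = [torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10)) for _ in range(3)]
suspects = flag_suspect_labels(models, torch.rand(4, 1, 28, 28), torch.tensor([0, 1, 2, 3]))
```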
arXiv Detail & Related papers (2021-03-30T13:16:35Z)
- Counterfactual Generative Networks [59.080843365828756]
We propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision.
By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background.
We show that the counterfactual images can improve out-of-distribution robustness with a marginal drop in performance on the original classification task.
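The shape/texture/background disentanglement can be pictured as alpha compositing of independently generated factors; the sketch below is an illustrative reading of that mechanism, with stand-in tensors rather than the paper's generator networks.

```python
# Sketch of composing a counterfactual image from independent mechanisms
# via an alpha mask. The mechanism outputs are random stand-ins here.
import torch

def composite(mask: torch.Tensor, texture: torch.Tensor, background: torch.Tensor) -> torch.Tensor:
    # mask in [0, 1] carries object shape; texture and background are images.
    return mask * texture + (1.0 - mask) * background

shape = torch.rand(1, 1, 64, 64)   # from a shape mechanism
tex = torch.rand(1, 3, 64, 64)     # from a texture mechanism
bg = torch.rand(1, 3, 64, 64)      # from a background mechanism
counterfactual = composite(shape, tex, bg)  # e.g. swap tex or bg to intervene
```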
arXiv Detail & Related papers (2021-01-15T10:23:12Z)
- Distilling Visual Priors from Self-Supervised Learning [24.79633121345066]
Convolutional Neural Networks (CNNs) are prone to overfit small training datasets.
We present a novel two-phase pipeline that leverages self-supervised learning and knowledge distillation to improve the generalization ability of CNN models for image classification under the data-deficient setting.
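A minimal sketch of the distillation phase, assuming a standard temperature-scaled knowledge-distillation loss; the temperature and mixing weight are illustrative choices, not the paper's settings.

```python
# Sketch of temperature-scaled knowledge distillation: blend a KL term on
# softened teacher/student logits with the usual cross-entropy on labels.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to compensate for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = kd_loss(torch.randn(8, 10), torch.randn(8, 10), torch.randint(0, 10, (8,)))
```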
arXiv Detail & Related papers (2020-08-01T13:07:18Z)
- From ImageNet to Image Classification: Contextualizing Progress on Benchmarks [99.19183528305598]
We study how specific design choices in the ImageNet creation process impact the fidelity of the resulting dataset.
Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for.
arXiv Detail & Related papers (2020-05-22T17:39:16Z)
- Multi-task pre-training of deep neural networks for digital pathology [8.74883469030132]
We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images.
We show that our models used as feature extractors either improve significantly over ImageNet pre-trained models or provide comparable performance.
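A hedged sketch of the multi-task setup described above: a shared backbone with one classification head per task, where the backbone is what gets reused as a feature extractor. The backbone architecture, dimensions, and task sizes are invented for illustration.

```python
# Sketch of multi-task pretraining: one shared backbone, one head per task.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, backbone_dim, classes_per_task):
        super().__init__()
        # Stand-in backbone; in practice this would be a CNN.
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, backbone_dim), nn.ReLU()
        )
        # One classification head per pretraining task.
        self.heads = nn.ModuleList(nn.Linear(backbone_dim, c) for c in classes_per_task)

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        return self.heads[task_id](self.backbone(x))

net = MultiTaskNet(backbone_dim=256, classes_per_task=[2, 5, 3])
logits = net(torch.rand(4, 3, 64, 64), task_id=1)  # a batch from task 1
```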
arXiv Detail & Related papers (2020-05-05T08:50:17Z)