Learning to See by Looking at Noise
- URL: http://arxiv.org/abs/2106.05963v1
- Date: Thu, 10 Jun 2021 17:56:46 GMT
- Title: Learning to See by Looking at Noise
- Authors: Manel Baradad, Jonas Wulff, Tongzhou Wang, Phillip Isola, Antonio
Torralba
- Abstract summary: We investigate a suite of image generation models that produce images from simple random processes.
These are then used as training data for a visual representation learner with a contrastive loss.
Our findings show that it is important for the noise to capture certain structural properties of real data but that good performance can be achieved even with processes that are far from realistic.
- Score: 87.12788334473295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current vision systems are trained on huge datasets, and these datasets come
with costs: curation is expensive, they inherit human biases, and there are
concerns over privacy and usage rights. To counter these costs, interest has
surged in learning from cheaper data sources, such as unlabeled images. In this
paper we go a step further and ask if we can do away with real image datasets
entirely, instead learning from noise processes. We investigate a suite of
image generation models that produce images from simple random processes. These
are then used as training data for a visual representation learner with a
contrastive loss. We study two types of noise processes, statistical image
models and deep generative models under different random initializations. Our
findings show that it is important for the noise to capture certain structural
properties of real data but that good performance can be achieved even with
processes that are far from realistic. We also find that diversity is a key
property to learn good representations. Datasets, models, and code are
available at https://mbaradad.github.io/learning_with_noise.
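To make the core idea concrete, here is a minimal NumPy sketch (not the paper's released code) of the pipeline the abstract describes: sample images from a simple statistical noise process, here white noise shaped to a 1/f power spectrum as a stand-in for the paper's statistical image models, then treat two random crops of the same noise image as a positive pair under a SimCLR-style InfoNCE loss. The encoder is a fixed random linear projection, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pink_noise_image(size=32, alpha=1.0):
    """Sample an image from a simple statistical noise process:
    white noise shaped to a 1/f^alpha power spectrum."""
    freqs = np.fft.fftfreq(size)
    fx, fy = np.meshgrid(freqs, freqs)
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0  # avoid division by zero at the DC component
    spectrum = rng.standard_normal((size, size)) \
        + 1j * rng.standard_normal((size, size))
    img = np.fft.ifft2(spectrum / radius**alpha).real
    return (img - img.mean()) / (img.std() + 1e-8)

def two_views(img, crop=24):
    """Two random crops of the same noise image form a positive pair."""
    views = []
    for _ in range(2):
        y, x = rng.integers(0, img.shape[0] - crop, size=2)
        views.append(img[y:y + crop, x:x + crop].ravel())
    return views

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss over a batch of paired embeddings (rows = samples)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # all-pairs similarity
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # diagonal = positive pairs

# Toy "encoder": a fixed random linear projection of flattened crops.
batch = [two_views(pink_noise_image()) for _ in range(16)]
W = rng.standard_normal((24 * 24, 64)) / 24.0
z1 = np.stack([v[0] for v in batch]) @ W
z2 = np.stack([v[1] for v in batch]) @ W
loss = info_nce(z1, z2)
print(f"InfoNCE loss on noise-image pairs: {loss:.3f}")
```

In the paper's actual setup the encoder is a deep network trained by gradient descent, and the noise processes range from spectrally-shaped noise like this to randomly initialized deep generators; this sketch only shows how positive pairs and the contrastive objective fit together when no real images are involved.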
Related papers
- Community Forensics: Using Thousands of Generators to Train Fake Image Detectors [15.166026536032142]
One of the key challenges of detecting AI-generated images is spotting images that have been created by previously unseen generative models.
We propose a new dataset that is significantly larger and more diverse than prior work.
The resulting dataset contains 2.7M images that have been sampled from 4803 different models.
arXiv Detail & Related papers (2024-11-06T18:59:41Z)
- Robust Neural Processes for Noisy Data [1.7268667700090563]
We study the behavior of in-context learning models when data is contaminated by noise.
We find that the models that perform best on clean data are different from the models that perform best on noisy data.
We propose a simple method to train NP models that makes them more robust to noisy data.
arXiv Detail & Related papers (2024-11-03T20:00:55Z)
- Deep Image Composition Meets Image Forgery [0.0]
Image forgery has been studied for many years.
Deep learning models require large amounts of labeled data for training.
We use state-of-the-art image composition deep learning models to generate spliced images that approach the quality of real-life manipulations.
arXiv Detail & Related papers (2024-04-03T17:54:37Z)
- NoiseTransfer: Image Noise Generation with Contrastive Embeddings [9.322843611215486]
We propose a new generative model that can synthesize noisy images with multiple different noise distributions.
We adopt recent contrastive learning techniques to learn distinguishable latent features of the noise.
Our model can generate new noisy images by transferring the noise characteristics solely from a single reference noisy image.
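As a rough illustration of what "transferring noise characteristics from a single reference noisy image" means, here is a naive NumPy baseline (not NoiseTransfer's generative model, which learns the noise distribution): estimate the residual noise in the reference image as noisy-minus-blurred, shuffle it spatially, and add it to a clean target at matched strength. The `box_blur` helper and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def box_blur(img, k=3):
    """Simple mean filter used as a crude noise-free estimate."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def transfer_noise(reference_noisy, clean, match_std=True):
    """Naive noise transfer: take the high-frequency residual of the
    reference (noisy minus blurred), shuffle its pixels so the noise is
    decoupled from the reference's content, and add it to the target."""
    residual = reference_noisy - box_blur(reference_noisy)
    noise = rng.permutation(residual.ravel()).reshape(clean.shape)
    if match_std:
        noise *= residual.std() / (noise.std() + 1e-8)
    return clean + noise

clean = np.linspace(0, 1, 64 * 64).reshape(64, 64)          # smooth gradient
reference = clean.T + 0.05 * rng.standard_normal((64, 64))  # noisy reference
noisy = transfer_noise(reference, clean)
print(f"injected noise std: {(noisy - clean).std():.4f}")
```

This baseline discards spatial and signal-dependent noise correlations, which is exactly the gap a learned generative model with contrastive noise embeddings is meant to close.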
arXiv Detail & Related papers (2023-01-31T11:09:15Z)
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted in six image classification datasets, consisting of three natural object datasets and three medical datasets.
arXiv Detail & Related papers (2022-12-04T08:49:14Z)
- IDR: Self-Supervised Image Denoising via Iterative Data Refinement [66.5510583957863]
We present a practical unsupervised image denoising method to achieve state-of-the-art denoising performance.
Our method only requires single noisy images and a noise model, which is easily accessible in practical raw image denoising.
To evaluate raw image denoising performance in real-world applications, we build a high-quality raw image dataset SenseNoise-500 that contains 500 real-life scenes.
arXiv Detail & Related papers (2021-11-29T07:22:53Z)
- Curious Representation Learning for Embodied Intelligence [81.21764276106924]
Self-supervised representation learning has achieved remarkable success in recent years.
Yet to build truly intelligent agents, we must construct representation learning algorithms that can learn from environments.
We propose a framework, curious representation learning, which jointly learns a reinforcement learning policy and a visual representation model.
arXiv Detail & Related papers (2021-05-03T17:59:20Z)
- From ImageNet to Image Classification: Contextualizing Progress on Benchmarks [99.19183528305598]
We study how specific design choices in the ImageNet creation process impact the fidelity of the resulting dataset.
Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for.
arXiv Detail & Related papers (2020-05-22T17:39:16Z)
- CycleISP: Real Image Restoration via Improved Data Synthesis [166.17296369600774]
We present a framework that models the camera imaging pipeline in both forward and reverse directions.
By training a new image denoising network on realistic synthetic data, we achieve state-of-the-art performance on real camera benchmark datasets.
arXiv Detail & Related papers (2020-03-17T15:20:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.