Analyzing the Sample Complexity of Self-Supervised Image Reconstruction Methods
- URL: http://arxiv.org/abs/2305.19079v2
- Date: Fri, 27 Oct 2023 14:18:02 GMT
- Title: Analyzing the Sample Complexity of Self-Supervised Image Reconstruction Methods
- Authors: Tobit Klug, Dogukan Atik, Reinhard Heckel
- Abstract summary: Supervised training of deep neural networks on pairs of clean images and noisy measurements achieves state-of-the-art performance for many image reconstruction tasks.
Self-supervised methods enable training based on noisy measurements only, without clean images.
We analytically show that a model trained with such self-supervised training is as good as the same model trained in a supervised fashion, but requires more training examples.
- Score: 24.840134419242414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised training of deep neural networks on pairs of clean images and noisy
measurements achieves state-of-the-art performance for many image reconstruction
tasks, but such training pairs are difficult to collect. Self-supervised
methods enable training based on noisy measurements only, without clean images.
In this work, we investigate the cost of self-supervised training in terms of
sample complexity for a class of self-supervised methods that enable the
computation of unbiased estimates of gradients of the supervised loss,
including noise2noise methods. We analytically show that a model trained with
such self-supervised training is as good as the same model trained in a
supervised fashion, but self-supervised training requires more examples than
supervised training. We then study self-supervised denoising and accelerated
MRI empirically and characterize the cost of self-supervised training in terms
of the number of additional samples required, and find that the performance gap
between self-supervised and supervised training vanishes as a function of the
training examples, at a problem-dependent rate, as predicted by our theory.
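The abstract's key mechanism is that noise2noise-style losses, which replace the clean target with a second independent noisy measurement, yield unbiased estimates of the supervised gradient. The following minimal sketch (not from the paper; all names and the scalar-denoiser setup are illustrative assumptions) shows this for a one-parameter linear denoiser, where both losses admit closed-form least-squares solutions that converge to the same optimum as the number of samples grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_scale(n, sigma=0.5):
    """Fit a scalar denoiser f(y) = w * y by least squares.

    Supervised: regress the clean signal x on the noisy input y1.
    Self-supervised (noise2noise-style): regress a second, independent
    noisy copy y2 on y1. Because the noise in y2 is independent of y1,
    the cross term E[(y2 - x) * y1] vanishes, so both losses have the
    same gradient in expectation and target the same optimal w.
    """
    x = rng.normal(size=n)                # clean signal (unit variance)
    y1 = x + sigma * rng.normal(size=n)   # noisy input
    y2 = x + sigma * rng.normal(size=n)   # second independent noisy copy
    w_sup = (y1 @ x) / (y1 @ y1)          # supervised least squares
    w_ssl = (y1 @ y2) / (y1 @ y1)         # noise2noise least squares
    return w_sup, w_ssl

# Optimal w for unit-variance signal: 1 / (1 + sigma^2) = 0.8
for n in [100, 10_000, 1_000_000]:
    w_sup, w_ssl = fit_scale(n)
    print(n, round(w_sup, 3), round(w_ssl, 3))
```

At small n the self-supervised estimate fluctuates more than the supervised one (the noisy target adds gradient variance), and the gap closes as n grows — the qualitative behavior the paper's sample-complexity analysis quantifies.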
Related papers
- EfficientTrain++: Generalized Curriculum Learning for Efficient Visual Backbone Training [79.96741042766524]
We reformulate the training curriculum as a soft-selection function.
We show that gradually exposing the contents of natural images can be readily achieved by adjusting the intensity of data augmentation.
The resulting method, EfficientTrain++, is simple, general, yet surprisingly effective.
arXiv Detail & Related papers (2024-05-14T17:00:43Z)
- Distilled Datamodel with Reverse Gradient Matching [74.75248610868685]
We introduce an efficient framework for assessing data impact, comprising offline training and online evaluation stages.
Our proposed method achieves comparable model behavior evaluation while significantly speeding up the process compared to the direct retraining method.
arXiv Detail & Related papers (2024-04-22T09:16:14Z)
- Improving In-Context Few-Shot Learning via Self-Supervised Training [48.801037246764935]
We propose to use self-supervision in an intermediate training stage between pretraining and downstream few-shot usage.
We find that the intermediate self-supervision stage produces models that outperform strong baselines.
arXiv Detail & Related papers (2022-05-03T18:01:07Z)
- Better Self-training for Image Classification through Self-supervision [3.492636597449942]
Self-supervision is learning without manual supervision by solving an automatically-generated pretext task.
This paper investigates three ways of incorporating self-supervision into self-training to improve accuracy in image classification.
arXiv Detail & Related papers (2021-09-02T08:24:41Z)
- Bootstrapped Self-Supervised Training with Monocular Video for Semantic Segmentation and Depth Estimation [11.468537169201083]
We formalize a bootstrapped self-supervised learning problem where a system is initially bootstrapped with supervised training on a labeled dataset.
In this work, we leverage temporal consistency between frames in monocular video to perform this bootstrapped self-supervised training.
In addition, we show that the bootstrapped self-supervised training framework can help a network learn depth estimation better than pure supervised training or self-supervised training.
arXiv Detail & Related papers (2021-03-19T21:28:58Z)
- Unsupervised Difficulty Estimation with Action Scores [7.6146285961466]
We present a simple method for calculating a difficulty score based on the accumulation of losses for each sample during training.
Our proposed method requires neither modification of the model nor any external supervision, as it can be implemented as a callback.
arXiv Detail & Related papers (2020-11-23T15:18:44Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Single-step Adversarial training with Dropout Scheduling [59.50324605982158]
We show that models trained using single-step adversarial training method learn to prevent the generation of single-step adversaries.
Models trained using proposed single-step adversarial training method are robust against both single-step and multi-step adversarial attacks.
arXiv Detail & Related papers (2020-04-18T14:14:00Z)
- Regularizers for Single-step Adversarial Training [49.65499307547198]
We propose three types of regularizers that help to learn robust models using single-step adversarial training methods.
Regularizers mitigate the effect of gradient masking by harnessing properties that differentiate a robust model from a pseudo-robust model.
arXiv Detail & Related papers (2020-02-03T09:21:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.