PatchNR: Learning from Small Data by Patch Normalizing Flow
Regularization
- URL: http://arxiv.org/abs/2205.12021v1
- Date: Tue, 24 May 2022 12:14:26 GMT
- Title: PatchNR: Learning from Small Data by Patch Normalizing Flow
Regularization
- Authors: Fabian Altekrüger, Alexander Denker, Paul Hagemann, Johannes
Hertrich, Peter Maass, Gabriele Steidl
- Abstract summary: We introduce a regularizer for the variational modeling of inverse problems in imaging based on normalizing flows.
Our regularizer, called patchNR, involves a normalizing flow learned on patches of very few images.
- Score: 57.37911115888587
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning neural networks using only a small amount of data is an important
ongoing research topic with tremendous potential for applications. In this
paper, we introduce a regularizer for the variational modeling of inverse
problems in imaging based on normalizing flows. Our regularizer, called
patchNR, involves a normalizing flow learned on patches of very few images. The
subsequent reconstruction method is completely unsupervised and the same
regularizer can be used for different forward operators acting on the same
class of images. By investigating the distribution of patches versus those of
the whole image class, we prove that our variational model is indeed a MAP
approach. Our model can be generalized to conditional patchNRs, if additional
supervised information is available. Numerical examples for low-dose CT,
limited-angle CT and superresolution of material images demonstrate that our
method provides high-quality results among unsupervised methods while
requiring only very little data.
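The variational model described in the abstract combines a data-fidelity term with the flow's negative log-likelihood evaluated on image patches. The following is a minimal sketch of that objective, assuming a toy linear forward operator and a unit-Gaussian stand-in for the learned flow prior; the helper names (`extract_patches`, `neg_log_prior`) are illustrative, not from the paper's code:

```python
import numpy as np

def extract_patches(x, p=2):
    """All overlapping p x p patches of image x, flattened to rows."""
    h, w = x.shape
    return np.array([x[i:i + p, j:j + p].ravel()
                     for i in range(h - p + 1)
                     for j in range(w - p + 1)])

def neg_log_prior(patches):
    """Stand-in for the flow's negative log-likelihood: unit Gaussian on patches."""
    return 0.5 * np.sum(patches ** 2)

def objective(x, A, y, lam=0.1, p=2):
    """patchNR-style objective: 0.5 * ||A x - y||^2 + lam * sum of patch NLLs."""
    residual = A @ x.ravel() - y
    return 0.5 * np.sum(residual ** 2) + lam * neg_log_prior(extract_patches(x, p))

# Toy example: identity forward operator on a 4x4 image, zero measurements.
x = np.ones((4, 4))
A = np.eye(16)
y = np.zeros(16)
val = objective(x, A, y, lam=0.1)
```

In the actual method, `neg_log_prior` would be the change-of-variables log-density of a normalizing flow trained on patches from very few images, and the objective would be minimized over `x` by gradient descent.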
Related papers
- Learning from small data sets: Patch-based regularizers in inverse problems for image reconstruction [1.1650821883155187]
Recent advances in machine learning require huge amounts of data and computing capacity to train networks.
Our paper addresses the issue of learning from small data sets by taking patches of very few images into account.
We show how we can achieve uncertainty quantification by approximating the posterior using Langevin Monte Carlo methods.
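The Langevin Monte Carlo sampling mentioned here can be sketched in a few lines. This is a generic unadjusted Langevin iteration, not the paper's implementation; the standard-Gaussian target below is a hypothetical stand-in chosen so that the gradient of the log-posterior is available in closed form:

```python
import numpy as np

def langevin_samples(grad_log_post, x0, step=1e-2, n=1000, rng=None):
    """Unadjusted Langevin dynamics:
    x_{k+1} = x_k + step * grad log p(x_k) + sqrt(2 * step) * noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    out = []
    for _ in range(n):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_post(x) + np.sqrt(2 * step) * noise
        out.append(x.copy())
    return np.array(out)

# Hypothetical target: standard Gaussian posterior, so grad log p(x) = -x.
samples = langevin_samples(lambda x: -x, x0=np.zeros(2), step=0.05, n=5000)
```

The spread of the resulting samples then yields a coordinate-wise (in imaging, pixel-wise) uncertainty estimate around the reconstruction.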
arXiv Detail & Related papers (2023-12-27T15:30:05Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuning model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
- DOLCE: A Model-Based Probabilistic Diffusion Framework for Limited-Angle CT Reconstruction [42.028139152832466]
Limited-Angle Computed Tomography (LACT) is a non-destructive evaluation technique used in a variety of applications ranging from security to medicine.
We present DOLCE, a new deep model-based framework for LACT that uses a conditional diffusion model as an image prior.
arXiv Detail & Related papers (2022-11-22T15:30:38Z)
- Compressed Sensing MRI Reconstruction Regularized by VAEs with Structured Image Covariance [7.544757765701024]
This paper investigates how generative models, trained on ground-truth images, can be used as priors for inverse problems.
We utilize variational autoencoders (VAEs) that generate not only an image but also a covariance uncertainty matrix for each image.
We compare our proposed learned regularization against other unlearned regularization approaches and unsupervised and supervised deep learning methods.
arXiv Detail & Related papers (2022-10-26T09:51:49Z)
- JPEG Artifact Correction using Denoising Diffusion Restoration Models [110.1244240726802]
We build upon Denoising Diffusion Restoration Models (DDRM) and propose a method for solving some non-linear inverse problems.
We leverage the pseudo-inverse operator used in DDRM and generalize this concept for other measurement operators.
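The pseudo-inverse operator referenced here can be illustrated with a toy linear measurement model: `np.linalg.pinv` computes the Moore-Penrose pseudo-inverse, which maps measurements back to the minimum-norm consistent estimate. This toy setup is only the linear-algebra ingredient, not DDRM itself, which additionally runs a diffusion model on top of it:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 5))   # underdetermined measurement operator
x_true = rng.standard_normal(5)   # unknown signal
y = H @ x_true                    # observed measurements

H_pinv = np.linalg.pinv(H)        # Moore-Penrose pseudo-inverse of H
x_hat = H_pinv @ y                # minimum-norm estimate consistent with y
```

For a full-row-rank `H`, applying `H` to `x_hat` reproduces the measurements exactly (`H @ H_pinv` is the identity), which is the consistency property such restoration methods build on.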
arXiv Detail & Related papers (2022-09-23T23:47:00Z)
- Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z)
- About Explicit Variance Minimization: Training Neural Networks for Medical Imaging With Limited Data Annotations [2.3204178451683264]
The Variance Aware Training (VAT) method introduces the variance error into the model loss function.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
arXiv Detail & Related papers (2021-05-28T21:34:04Z)
- A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z)
- CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm generalizes to detecting various types of real-world defects.
arXiv Detail & Related papers (2021-04-08T19:04:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.