Learning regularization and intensity-gradient-based fidelity for single image super resolution
- URL: http://arxiv.org/abs/2003.10689v1
- Date: Tue, 24 Mar 2020 07:03:18 GMT
- Title: Learning regularization and intensity-gradient-based fidelity for single image super resolution
- Authors: Hu Liang, Shengrong Zhao
- Abstract summary: We study the image degradation process and establish a degradation model in both intensity and gradient space.
A comprehensive data consistency constraint is established for the reconstruction.
The proposed fidelity term and the designed regularization term are embedded into a regularization framework.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting more useful information from the observation is an important and
difficult problem in single image super resolution. Learning-based methods are
representative approaches to this task; however, their results can be unstable, as
there may be a large difference between the training data and the test data.
Regularization-based methods can effectively utilize the self-information of the
observation, but the degradation model they use considers degradation only in
intensity space. They may therefore fail to reconstruct images well, since the
effects of degradation in other feature spaces are not considered. In this paper, we
first study the image degradation process and establish a degradation model in both
intensity and gradient space. A comprehensive data consistency constraint is thus
established for the reconstruction, so that more useful information can be extracted
from the observed data. Second, the regularization term is learned by a designed
symmetric residual deep neural network, which can retrieve similar external
information from a predefined dataset while avoiding artificial bias. Finally, the
proposed fidelity term and the designed regularization term are embedded into a
regularization framework, and an optimization method is developed based on the
half-quadratic splitting method and the pseudo conjugate method. Experimental
results indicate that the proposed method outperforms the comparison methods in
both subjective quality and objective metrics.
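As a concrete illustration of the intensity-plus-gradient fidelity described above, one plausible form of the resulting objective is sketched below. The abstract does not give the formula, so the notation (H for blurring, D for downsampling, \nabla for the gradient operator, \lambda and \mu for balancing weights, R for the learned regularizer) and the exact weighting are assumptions.

```latex
% Assumed form of the combined objective; not quoted from the paper.
\hat{x} = \arg\min_{x}\;
  \underbrace{\| D H x - y \|_2^2}_{\text{intensity-space fidelity}}
  + \lambda \underbrace{\| \nabla (D H x) - \nabla y \|_2^2}_{\text{gradient-space fidelity}}
  + \mu\, R(x)
```

Here y is the observed low-resolution image and x the latent high-resolution image; the two fidelity terms together realize the comprehensive data consistency constraint.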
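The abstract gives no architectural details for the symmetric residual network, so the sketch below only shows one common way such a learned regularizer is realized: a mirrored encoder-decoder stack of convolutions with a global residual connection. Every layer choice here (depth, width, activations) is an assumption for illustration, written in PyTorch.

```python
import torch
import torch.nn as nn

class SymmetricResidualNet(nn.Module):
    """Illustrative symmetric residual network (assumed design):
    a mirrored stack of conv layers with symmetric skip connections
    and a global residual, so the network learns a correction to the
    input rather than the image itself."""
    def __init__(self, channels=1, features=64, depth=3):
        super().__init__()
        self.encoder = nn.ModuleList(
            [nn.Conv2d(channels if i == 0 else features, features, 3, padding=1)
             for i in range(depth)])
        # The decoder mirrors the encoder (the "symmetric" part).
        self.decoder = nn.ModuleList(
            [nn.Conv2d(features, channels if i == depth - 1 else features, 3, padding=1)
             for i in range(depth)])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        skips = []
        h = x
        for conv in self.encoder:
            h = self.act(conv(h))
            skips.append(h)
        for i, conv in enumerate(self.decoder):
            h = conv(h + skips[-(i + 1)])   # symmetric skip connection
            if i < len(self.decoder) - 1:
                h = self.act(h)
        return x - h                        # global residual: remove predicted artifacts
```

Such a network could then serve as the learned regularization step in the optimization sketched next.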
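For the optimization, the abstract names half-quadratic splitting combined with a pseudo conjugate step. A minimal sketch of how such a scheme typically alternates between a learned-prior update and a conjugate-gradient data-fidelity solve is given below; the interfaces degrade / degrade_adjoint (standing for the stacked intensity- and gradient-space degradation operator A and its adjoint) and learned_regularizer are hypothetical stand-ins, not the paper's actual implementation.

```python
import numpy as np

def hqs_reconstruct(y, degrade, degrade_adjoint, learned_regularizer,
                    mu=0.05, n_outer=20, n_cg=10):
    """Minimal half-quadratic splitting sketch (assumed interfaces).

    Alternates:
      z-step: z = learned_regularizer(x)   (prior / denoising step)
      x-step: solve (A^T A + mu I) x = A^T y + mu z by conjugate gradient,
              where A stacks the intensity- and gradient-space degradations.
    """
    x = degrade_adjoint(y)                  # crude initialization from the observation
    for _ in range(n_outer):
        z = learned_regularizer(x)          # auxiliary-variable (prior) update
        b = degrade_adjoint(y) + mu * z     # right-hand side of the normal equations
        r = b - (degrade_adjoint(degrade(x)) + mu * x)
        p = r.copy()
        rs = np.vdot(r, r)
        for _ in range(n_cg):               # conjugate-gradient inner loop
            Ap = degrade_adjoint(degrade(p)) + mu * p
            alpha = rs / np.vdot(p, Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            rs_new = np.vdot(r, r)
            p = r + (rs_new / rs) * p
            rs = rs_new
    return x
```

In this pattern the learned network plays the role of a proximal operator for the regularization term, while the quadratic fidelity subproblem stays linear and is solved cheaply by a few conjugate-gradient iterations.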
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast-constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- Unsupervised Training of Convex Regularizers using Maximum Likelihood Estimation [12.625383613718636]
We propose an unsupervised approach using maximum marginal likelihood estimation to train a convex neural network-based image regularization term directly on noisy measurements.
Experiments demonstrate that the proposed method produces priors that are nearly competitive with the analogous supervised training method for various image corruption operators.
arXiv Detail & Related papers (2024-04-08T12:27:00Z)
- Fast and Stable Diffusion Inverse Solver with History Gradient Update [28.13197297970759]
We incorporate historical gradients into the optimization process, termed the History Gradient Update (HGU).
Experimental results demonstrate that, compared to previous sampling algorithms, sampling algorithms with HGU achieve state-of-the-art results in medical image reconstruction.
arXiv Detail & Related papers (2023-07-22T12:37:34Z)
- Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that the weights trained on synthetic data are robust against accumulated error perturbations with the regularization towards the flat trajectory.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z)
- Unsupervised feature selection via self-paced learning and low-redundant regularization [6.083524716031565]
An unsupervised feature selection method is proposed by integrating the frameworks of self-paced learning and subspace learning.
The convergence of the method is proved theoretically and verified experimentally.
The experimental results show that the proposed method can improve the performance of clustering methods and outperforms other compared algorithms.
arXiv Detail & Related papers (2021-12-14T08:28:19Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against state-of-the-art methods in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Score-based diffusion models for accelerated MRI [35.3148116010546]
We introduce a way to sample data from a conditional distribution given the measurements, such that the model can be readily used for solving inverse problems in imaging.
Our model requires only magnitude images for training, yet it is able to reconstruct complex-valued data and even extends to parallel imaging.
arXiv Detail & Related papers (2021-10-08T08:42:03Z)
- An Adaptive Framework for Learning Unsupervised Depth Completion [59.17364202590475]
We present a method to infer a dense depth map from a color image and associated sparse depth measurements.
We show that regularization and co-visibility are related via the fitness of the model to data and can be unified into a single framework.
arXiv Detail & Related papers (2021-06-06T02:27:55Z)
- Deep Dimension Reduction for Supervised Representation Learning [51.10448064423656]
We propose a deep dimension reduction approach to learning representations with essential characteristics.
The proposed approach is a nonparametric generalization of the sufficient dimension reduction method.
We show that the estimated deep nonparametric representation is consistent in the sense that its excess risk converges to zero.
arXiv Detail & Related papers (2020-06-10T14:47:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.