Adaptive double-phase Rudin--Osher--Fatemi denoising model
- URL: http://arxiv.org/abs/2510.04382v1
- Date: Sun, 05 Oct 2025 22:26:06 GMT
- Title: Adaptive double-phase Rudin--Osher--Fatemi denoising model
- Authors: Wojciech Górny, Michał Łasica, Alexandros Matsoukas,
- Abstract summary: We propose a new image denoising model based on a variable-growth total variation regularization of double-phase type with adaptive weight. It is designed to reduce staircasing with respect to the classical Rudin--Osher--Fatemi model.
- Score: 41.99844472131922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new image denoising model based on a variable-growth total variation regularization of double-phase type with adaptive weight. It is designed to reduce staircasing with respect to the classical Rudin--Osher--Fatemi model, while preserving the edges of the image in a similar fashion. We implement the model and test its performance on synthetic and natural images in 1D and 2D over a range of noise levels.
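The classical ROF baseline that the proposed double-phase model modifies can be sketched in a few lines. The sketch below is a generic smoothed-TV gradient descent in 1D, not the paper's adaptive method; the function name, parameter values, and the smoothing constant `eps` are illustrative assumptions.

```python
# A minimal sketch of the classical Rudin--Osher--Fatemi (ROF) baseline that
# the proposed double-phase model modifies. This is NOT the paper's adaptive
# model: it minimizes  lam/2 * ||u - f||^2 + sum_i sqrt((Du)_i^2 + eps^2)
# by plain gradient descent on an eps-smoothed total variation in 1D.
import numpy as np

def rof_denoise_1d(f, lam=10.0, eps=1e-2, step=2e-3, iters=500):
    """Denoise a 1D signal f via smoothed-TV gradient descent (illustrative)."""
    u = f.astype(float).copy()
    for _ in range(iters):
        du = np.diff(u)                   # forward differences (Du)_i
        w = du / np.sqrt(du**2 + eps**2)  # derivative of sqrt(du^2 + eps^2)
        grad_tv = np.zeros_like(u)
        grad_tv[1:] += w                  # chain rule through D: grad_TV = D^T w
        grad_tv[:-1] -= w
        u -= step * (lam * (u - f) + grad_tv)
    return u
```

On a noisy piecewise-constant signal this removes most of the noise while keeping the jumps; the paper's double-phase regularizer replaces the fixed TV growth with a variable growth controlled by an adaptive weight, precisely to reduce the staircasing this baseline exhibits.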
Related papers
- Robust image segmentation model based on binary level set [3.6985338895569204]
This paper models the illumination term in intensity inhomogeneity images.
To enhance the model's robustness to noisy images, we incorporate the binary level set model into the proposed model.
By introducing the variational operator GL, our model demonstrates better capability in segmenting noisy images.
arXiv Detail & Related papers (2024-03-20T08:33:40Z) - A locally statistical active contour model for SAR image segmentation can be solved by denoising algorithms [0.881121308982678]
We propose a novel locally statistical variational active contour model based on the I-divergence-TV denoising model. Inspired by a fast denoising algorithm recently proposed by Jia-Zhao, we propose two fast fixed-point algorithms to solve the SAR image segmentation problem.
arXiv Detail & Related papers (2024-01-10T00:27:14Z) - Gradpaint: Gradient-Guided Inpainting with Diffusion Models [71.47496445507862]
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved remarkable results in conditional and unconditional image generation.
We present GradPaint, which steers the generation towards a globally coherent image.
GradPaint generalizes well to diffusion models trained on various datasets, improving upon current state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2023-09-18T09:36:24Z) - Diffusion Model for Generative Image Denoising [17.897180118637856]
In supervised learning for image denoising, paired clean and noisy images are usually collected or synthesized to train a denoising model.
In this paper, we regard the denoising task as a problem of estimating the posterior distribution of clean images conditioned on noisy images.
arXiv Detail & Related papers (2023-02-05T14:53:07Z) - On Distillation of Guided Diffusion Models [94.95228078141626]
We propose an approach to distilling classifier-free guided diffusion models into models that are fast to sample from.
For standard diffusion models trained in pixel space, our approach is able to generate images visually comparable to those of the original model.
For diffusion models trained on the latent-space (e.g., Stable Diffusion), our approach is able to generate high-fidelity images using as few as 1 to 4 denoising steps.
arXiv Detail & Related papers (2022-10-06T18:03:56Z) - Towards Bidirectional Arbitrary Image Rescaling: Joint Optimization and Cycle Idempotence [76.93002743194974]
We propose a method to treat arbitrary rescaling, both upscaling and downscaling, as one unified process.
The proposed model is able to learn upscaling and downscaling simultaneously and achieve bidirectional arbitrary image rescaling.
It is shown to be robust in the cycle idempotence test, remaining free of severe degradation in reconstruction accuracy when the downscaling-to-upscaling cycle is applied repeatedly.
arXiv Detail & Related papers (2022-03-02T07:42:15Z) - Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising [54.730707387866076]
We introduce Noise2Same, a novel self-supervised denoising framework.
In particular, Noise2Same requires neither J-invariance nor extra information about the noise model.
Our results show that Noise2Same remarkably outperforms previous self-supervised denoising methods.
arXiv Detail & Related papers (2020-10-22T18:12:26Z) - Iterative regularization algorithms for image denoising with the TV-Stokes model [4.09305676000817]
We propose a set of iterative regularization algorithms for the TV-Stokes model to restore images from noisy images with Gaussian noise.
Experimental results show an improvement over the original method in the quality of the restored image.
arXiv Detail & Related papers (2020-09-24T22:55:18Z) - Alternating minimization for a single step TV-Stokes model for image denoising [4.471370467116141]
The paper presents a fully coupled TV-Stokes model and proposes an algorithm based on alternating minimization of the objective functional.
A convergence analysis is given.
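Alternating minimization, the strategy applied here to the coupled TV-Stokes functional, can be illustrated on a toy smooth objective. The objective and all names below are illustrative assumptions, not the paper's functional: each step exactly minimizes over one variable with the other held fixed.

```python
# A generic sketch of alternating minimization on the toy objective
#   F(x, y) = (x - y)^2 + (x - 2)^2 + (y + 1)^2,
# alternating exact coordinate updates obtained by setting each partial
# derivative to zero. Purely illustrative; not the TV-Stokes functional.

def alternating_minimize(iters=100):
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = (y + 2.0) / 2.0   # argmin over x with y fixed
        y = (x - 1.0) / 2.0   # argmin over y with x fixed
    return x, y  # converges to (1.0, 0.0), the joint minimizer
```

Each update decreases F, so the iterates converge to the stationary point of the joint problem; convergence analyses such as the one in this paper establish the analogous property for the nonsmooth TV-Stokes objective.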
arXiv Detail & Related papers (2020-09-24T22:31:15Z) - Learning Noise-Aware Encoder-Decoder from Noisy Labels by Alternating Back-Propagation for Saliency Detection [54.98042023365694]
We propose a noise-aware encoder-decoder framework to disentangle a clean saliency predictor from noisy training examples.
The proposed model consists of two sub-models parameterized by neural networks.
arXiv Detail & Related papers (2020-07-23T18:47:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.