MFSR: Multi-fractal Feature for Super-resolution Reconstruction with Fine Details Recovery
- URL: http://arxiv.org/abs/2502.19797v1
- Date: Thu, 27 Feb 2025 06:12:18 GMT
- Title: MFSR: Multi-fractal Feature for Super-resolution Reconstruction with Fine Details Recovery
- Authors: Lianping Yang, Peng Jiao, Jinshan Pan, Hegui Zhu, Su Guo
- Abstract summary: Fractal features can capture the rich details of both micro and macro texture structures in an image. We propose a diffusion model-based super-resolution method incorporating fractal features of low-resolution images, named MFSR. Experiments conducted on various face and natural image datasets demonstrate that MFSR can generate higher quality images.
- Score: 30.279489397832943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In image super-resolution, the handling of complex localized information has a significant impact on the quality of the generated image. Fractal features can capture the rich details of both micro and macro texture structures in an image. We therefore propose a diffusion model-based super-resolution method that incorporates fractal features of low-resolution images, named MFSR. MFSR leverages these fractal features as reinforcement conditions in the denoising process of the diffusion model to ensure accurate recovery of texture information. MFSR employs convolution as a soft assignment to approximate the fractal features of low-resolution images, and the same approach is used to approximate their density feature maps. Through soft assignment, the spatial layout of the image is described hierarchically, encoding its self-similarity properties at different scales. Different processing methods are applied to the various types of features to enrich the information available to the model. In addition, a sub-denoiser is integrated into the denoising U-Net to reduce noise in the feature maps during up-sampling, improving the quality of the generated images. Experiments conducted on various face and natural image datasets demonstrate that MFSR generates higher-quality images.
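The abstract describes approximating fractal and density features of the low-resolution input with a convolutional soft assignment over multiple scales. The sketch below illustrates that general idea with a differentiable, box-counting-style estimate built from pooling; the function names, box sizes, and sigmoid sharpness are illustrative assumptions and not the authors' implementation, which learns the assignment with convolutions inside the diffusion model.

```python
# Minimal sketch (not the MFSR code): approximate box-counting fractal and
# density features of a low-resolution image with pooled soft assignments.
import torch
import torch.nn.functional as F

def soft_box_count(x: torch.Tensor, box_size: int):
    """Return soft box occupancy and the per-box density map for a [B,1,H,W] image in [0,1]."""
    # Average pooling gives the per-box measure (a coarse "density map");
    # a steep sigmoid turns it into a soft indicator of occupied boxes.
    density = F.avg_pool2d(x, kernel_size=box_size, stride=box_size)
    occupancy = torch.sigmoid(50.0 * (density - 0.05))
    return occupancy, density

def fractal_feature(x: torch.Tensor, box_sizes=(2, 4, 8, 16)) -> torch.Tensor:
    """Estimate a per-image box-dimension-like scalar from a log-log fit over scales."""
    log_counts, log_scales = [], []
    for s in box_sizes:
        occupancy, _ = soft_box_count(x, s)
        log_counts.append(torch.log(occupancy.sum(dim=(1, 2, 3)) + 1e-6))
        log_scales.append(torch.log(torch.tensor(1.0 / s)))
    log_counts = torch.stack(log_counts, dim=1)        # [B, S]
    log_scales = torch.stack(log_scales).to(x.device)  # [S]
    # Least-squares slope of log N(s) versus log(1/s) approximates the box dimension.
    ls = log_scales - log_scales.mean()
    slope = (log_counts * ls).sum(dim=1) / (ls * ls).sum()
    return slope                                       # [B]

lr = torch.rand(4, 1, 64, 64)     # toy low-resolution batch
print(fractal_feature(lr).shape)  # torch.Size([4])
```

A per-image or per-region scalar of this kind, together with the intermediate density maps, is the sort of scale-hierarchy signal the abstract says MFSR injects as a reinforcement condition during denoising.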
Related papers
- Leveraging Diffusion Knowledge for Generative Image Compression with Fractal Frequency-Aware Band Learning [16.077768397480902]
Generative image compression approaches produce detailed, realistic images instead of only sharp-looking reconstructions.
We propose a novel deep learning-based generative image compression method injected with diffusion knowledge.
Our method achieves better distortions at high realism and better realism at low distortion than ever before.
arXiv Detail & Related papers (2025-03-14T11:41:33Z) - Multi-scale Frequency Enhancement Network for Blind Image Deblurring [7.198959621445282]
We propose a multi-scale frequency enhancement network (MFENet) for blind image deblurring.
To capture the multi-scale spatial and channel information of blurred images, we introduce a multi-scale feature extraction module (MS-FE) based on depthwise separable convolutions.
We demonstrate that the proposed method achieves superior deblurring performance in both visual quality and objective evaluation metrics.
arXiv Detail & Related papers (2024-11-11T11:49:18Z) - Deep Learning based Optical Image Super-Resolution via Generative Diffusion Models for Layerwise in-situ LPBF Monitoring [4.667646675144656]
We implement generative deep learning models to link low-cost, low-resolution images of the build plate to detailed high-resolution optical images.
A conditional latent probabilistic diffusion model is trained to produce realistic high-resolution images of the build plate from low-resolution webcam images.
We also design a framework to recreate the 3D morphology of the printed part and analyze the surface roughness of the reconstructed samples.
arXiv Detail & Related papers (2024-09-20T02:59:25Z) - QMambaBSR: Burst Image Super-Resolution with Query State Space Model [55.56075874424194]
Burst super-resolution aims to reconstruct high-resolution images with higher quality and richer details by fusing the sub-pixel information from multiple burst low-resolution frames.
In BurstSR, the key challenge lies in extracting sub-pixel details that complement the base frame's content while simultaneously suppressing high-frequency noise disturbance.
We introduce a novel Query Mamba Burst Super-Resolution (QMambaBSR) network, which incorporates a Query State Space Model (QSSM) and an Adaptive Up-sampling module (AdaUp).
arXiv Detail & Related papers (2024-08-16T11:15:29Z) - Research on Image Super-Resolution Reconstruction Mechanism based on Convolutional Neural Network [8.739451985459638]
Super-resolution algorithms transform one or more sets of low-resolution images captured from the same scene into high-resolution images.
The extraction of image features and nonlinear mapping methods in the reconstruction process remain challenging for existing algorithms.
The objective is to recover high-quality, high-resolution images from low-resolution images.
arXiv Detail & Related papers (2024-07-18T06:50:39Z) - CoSeR: Bridging Image and Language for Cognitive Super-Resolution [74.24752388179992]
We introduce the Cognitive Super-Resolution (CoSeR) framework, empowering SR models with the capacity to comprehend low-resolution images.
We achieve this by marrying image appearance and language understanding to generate a cognitive embedding.
To further improve image fidelity, we propose a novel condition injection scheme called "All-in-Attention".
arXiv Detail & Related papers (2023-11-27T16:33:29Z) - Bridging Component Learning with Degradation Modelling for Blind Image Super-Resolution [69.11604249813304]
We propose a components decomposition and co-optimization network (CDCN) for blind SR.
CDCN decomposes the input LR image into structure and detail components in feature space.
We present a degradation-driven learning strategy to jointly supervise the HR image detail and structure restoration process.
arXiv Detail & Related papers (2022-12-03T14:53:56Z) - Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms.
arXiv Detail & Related papers (2022-09-07T17:30:40Z) - Rank-Enhanced Low-Dimensional Convolution Set for Hyperspectral Image Denoising [50.039949798156826]
This paper tackles the challenging problem of hyperspectral (HS) image denoising.
We propose a rank-enhanced low-dimensional convolution set (Re-ConvSet).
We then incorporate Re-ConvSet into the widely-used U-Net architecture to construct an HS image denoising method.
arXiv Detail & Related papers (2022-07-09T13:35:12Z) - DWDN: Deep Wiener Deconvolution Network for Non-Blind Image Deblurring [66.91879314310842]
We propose an explicit deconvolution process in a feature space by integrating a classical Wiener deconvolution framework with learned deep features.
A multi-scale cascaded feature refinement module then predicts the deblurred image from the deconvolved deep features.
We show that the proposed deep Wiener deconvolution network produces deblurred results with visibly fewer artifacts and quantitatively outperforms state-of-the-art non-blind image deblurring methods by a wide margin (a brief Wiener deconvolution sketch follows this list).
arXiv Detail & Related papers (2021-03-18T00:38:11Z) - Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for the image restoration task.
We present a novel architecture with the goal of maintaining spatially precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
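As noted in the DWDN entry above, the classical Wiener deconvolution it builds on can be written in a few lines. The sketch below is a generic frequency-domain implementation applied to a single 2-D array, not DWDN's feature-space module; the kernel, test data, and noise-to-signal ratio are illustrative assumptions.

```python
# Minimal sketch of classical frequency-domain Wiener deconvolution
# (DWDN applies a learned variant of this to deep feature channels).
import numpy as np

def psf_to_otf(kernel: np.ndarray, shape: tuple) -> np.ndarray:
    """Pad a small PSF to `shape`, centre it at the origin, and FFT it."""
    k = np.zeros(shape)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(k)

def wiener_deconvolve(blurred: np.ndarray, kernel: np.ndarray, nsr: float = 1e-3) -> np.ndarray:
    """Wiener filter conj(K) / (|K|^2 + nsr); nsr regularises near-zero frequencies."""
    K = psf_to_otf(kernel, blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(K) / (np.abs(K) ** 2 + nsr) * Y
    return np.real(np.fft.ifft2(X))

# Toy usage: circularly blur a random "feature map" with a box PSF, then recover it.
rng = np.random.default_rng(0)
feature = rng.random((64, 64))
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(feature) * psf_to_otf(psf, feature.shape)))
restored = wiener_deconvolve(blurred, psf)
print(float(np.mean((restored - feature) ** 2)))  # small, but nonzero where the box PSF's OTF vanishes
```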