Learning Regularized Multi-Scale Feature Flow for High Dynamic Range Imaging
- URL: http://arxiv.org/abs/2207.02539v1
- Date: Wed, 6 Jul 2022 09:37:28 GMT
- Title: Learning Regularized Multi-Scale Feature Flow for High Dynamic Range Imaging
- Authors: Qian Ye, Masanori Suganuma, Jun Xiao, Takayuki Okatani
- Abstract summary: We propose a deep network that tries to learn multi-scale feature flow guided by the regularized loss.
It first extracts multi-scale features and then aligns features from non-reference images.
After alignment, we use residual channel attention blocks to merge the features from different images.
- Score: 29.691689596845112
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstructing ghosting-free high dynamic range (HDR) images of dynamic
scenes from a set of multi-exposure images is a challenging task, especially
with large object motion and occlusions, leading to visible artifacts with
existing methods. To address this problem, we propose a deep network that tries
to learn multi-scale feature flow guided by the regularized loss. It first
extracts multi-scale features and then aligns features from non-reference
images. After alignment, we use residual channel attention blocks to merge the
features from different images. Extensive qualitative and quantitative
comparisons show that our approach achieves state-of-the-art performance and
produces excellent results where color artifacts and geometric distortions are
significantly reduced.
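The abstract above names three stages: multi-scale feature extraction, alignment of the non-reference features via learned feature flow, and merging with residual channel attention blocks. The PyTorch sketch below shows one way such a pipeline can be wired together. It is a minimal illustration under assumed design choices (a two-level flow estimate, 6-channel inputs formed by concatenating each LDR frame with an exposure-aligned copy, and the names HDRSketch, RCAB, and warp), not the authors' implementation, and the regularized flow loss used for training is omitted.
```python
# Minimal, illustrative PyTorch sketch of the three stages named in the abstract:
# multi-scale feature extraction, flow-guided alignment of the non-reference
# features, and merging with residual channel attention. All module sizes and
# names are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(feat, flow):
    """Backward-warp features (B,C,H,W) by a per-pixel flow field (B,2,H,W)."""
    _, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=feat.device),
                            torch.arange(w, device=feat.device), indexing="ij")
    coords = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0   # normalise x to [-1, 1]
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0   # normalise y to [-1, 1]
    return F.grid_sample(feat, torch.stack((gx, gy), dim=-1), align_corners=True)


class RCAB(nn.Module):
    """Residual channel attention block (squeeze-and-excitation style gating)."""

    def __init__(self, ch, reduction=8):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True),
                                  nn.Conv2d(ch, ch, 3, padding=1))
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(True),
                                  nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())

    def forward(self, x):
        res = self.body(x)
        return x + res * self.attn(res)


class HDRSketch(nn.Module):
    """Toy three-exposure model; the middle exposure serves as the reference."""

    def __init__(self, ch=64):
        super().__init__()
        # Each input is an LDR frame concatenated with its exposure-aligned copy.
        self.extract = nn.Conv2d(6, ch, 3, padding=1)
        self.down = nn.AvgPool2d(2)
        # Predicts a 2-channel flow from (non-reference, reference) features.
        self.flow_head = nn.Conv2d(2 * ch, 2, 3, padding=1)
        self.merge = nn.Sequential(nn.Conv2d(3 * ch, ch, 3, padding=1),
                                   RCAB(ch), RCAB(ch),
                                   nn.Conv2d(ch, 3, 3, padding=1))

    def align(self, f_src, f_ref):
        # Coarse-to-fine: estimate flow at half resolution, upsample, then refine.
        flow = self.flow_head(torch.cat((self.down(f_src), self.down(f_ref)), 1))
        flow = 2.0 * F.interpolate(flow, scale_factor=2, mode="bilinear",
                                   align_corners=True)
        flow = flow + self.flow_head(torch.cat((warp(f_src, flow), f_ref), 1))
        return warp(f_src, flow)

    def forward(self, x_short, x_ref, x_long):
        f_short, f_ref, f_long = (self.extract(x) for x in (x_short, x_ref, x_long))
        aligned = torch.cat((self.align(f_short, f_ref), f_ref,
                             self.align(f_long, f_ref)), 1)
        return torch.sigmoid(self.merge(aligned))


if __name__ == "__main__":
    frames = [torch.rand(1, 6, 128, 128) for _ in range(3)]  # dummy bracketed inputs
    print(HDRSketch()(*frames).shape)  # -> torch.Size([1, 3, 128, 128])
```
In a real model each stage would be considerably deeper, and the estimated feature flow would be supervised by the regularized loss described in the abstract.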
Related papers
- Multi-scale Frequency Enhancement Network for Blind Image Deblurring [7.198959621445282]
We propose a multi-scale frequency enhancement network (MFENet) for blind image deblurring.
To capture the multi-scale spatial and channel information of blurred images, we introduce a multi-scale feature extraction module (MS-FE) based on depthwise separable convolutions (see the illustrative sketch after this list).
We demonstrate that the proposed method achieves superior deblurring performance in both visual quality and objective evaluation metrics.
arXiv Detail & Related papers (2024-11-11T11:49:18Z)
- Robust Network Learning via Inverse Scale Variational Sparsification [55.64935887249435]
We introduce an inverse scale variational sparsification framework within a time-continuous inverse scale space formulation.
Unlike frequency-based methods, our approach removes noise by smoothing out small-scale features.
We show the efficacy of our approach through enhanced robustness against various noise types.
arXiv Detail & Related papers (2024-09-27T03:17:35Z)
- High Dynamic Range Imaging of Dynamic Scenes with Saturation Compensation but without Explicit Motion Compensation [20.911738532410766]
High dynamic range (HDR) imaging is a highly challenging task since a large amount of information is lost due to the limitations of camera sensors.
For HDR imaging, some methods capture multiple low dynamic range (LDR) images with varying exposures to aggregate more information.
Most existing methods focus on motion compensation to reduce the ghosting artifacts, but they still produce unsatisfying results.
arXiv Detail & Related papers (2023-08-22T02:44:03Z)
- Gated Multi-Resolution Transfer Network for Burst Restoration and Enhancement [75.25451566988565]
We propose a novel Gated Multi-Resolution Transfer Network (GMTNet) to reconstruct a spatially precise high-quality image from a burst of low-quality raw images.
Detailed experimental analysis on five datasets validates our approach and sets a new state of the art for burst super-resolution, burst denoising, and low-light burst enhancement.
arXiv Detail & Related papers (2023-04-13T17:54:00Z)
- Deep Progressive Feature Aggregation Network for High Dynamic Range Imaging [24.94466716276423]
We propose a deep progressive feature aggregation network for improving HDR imaging quality in dynamic scenes.
Our method implicitly samples high-correspondence features and aggregates them in a coarse-to-fine manner for alignment.
Experiments show that our proposed method achieves state-of-the-art performance across different scenes.
arXiv Detail & Related papers (2022-08-04T04:37:35Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially precise, high-resolution representations throughout the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- HDRUNet: Single Image HDR Reconstruction with Denoising and Dequantization [39.82945546614887]
We propose a novel learning-based approach using a spatially dynamic encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single image HDR reconstruction.
Our method achieves state-of-the-art performance in both quantitative comparisons and visual quality.
arXiv Detail & Related papers (2021-05-27T12:12:34Z)
- DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras [63.186486240525554]
DeepMultiCap is a novel method for multi-person performance capture using sparse multi-view cameras.
Our method can capture time-varying surface details without the need for pre-scanned template models.
arXiv Detail & Related papers (2021-05-01T14:32:13Z)
- Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) a non-linear mapping from a camera response function, and (3) quantization; an illustrative sketch of this formation model appears after this list.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
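As a companion to the MFENet entry above, the following is a minimal sketch of a multi-scale feature extraction block built from depthwise separable convolutions, as that summary describes. The use of parallel kernel sizes, the channel counts, and the names DepthwiseSeparableConv and MultiScaleDWBlock are illustrative assumptions rather than the MS-FE module itself.
```python
# Illustrative sketch of multi-scale feature extraction with depthwise
# separable convolutions. Kernel sizes, channel counts, and class names are
# assumptions, not the MS-FE module of the MFENet paper.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    def __init__(self, ch, kernel_size):
        super().__init__()
        self.depthwise = nn.Conv2d(ch, ch, kernel_size,
                                   padding=kernel_size // 2, groups=ch)
        self.pointwise = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class MultiScaleDWBlock(nn.Module):
    """Extracts features at several receptive-field sizes and fuses them."""

    def __init__(self, ch=32, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            DepthwiseSeparableConv(ch, k) for k in kernel_sizes)
        self.fuse = nn.Conv2d(ch * len(kernel_sizes), ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    x = torch.rand(1, 32, 64, 64)
    print(MultiScaleDWBlock()(x).shape)  # -> torch.Size([1, 32, 64, 64])
```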
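The entry "Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline" lists the LDR formation steps as (1) dynamic range clipping, (2) a non-linear camera response function (CRF), and (3) quantization. The NumPy sketch below simulates that forward model; the gamma-style CRF, the 8-bit quantization, and the name hdr_to_ldr are simplifying assumptions, not the paper's calibrated pipeline.
```python
# Illustrative forward model of LDR formation from linear HDR radiance:
# (1) dynamic range clipping, (2) a non-linear CRF (assumed gamma here),
# and (3) quantization to a fixed bit depth.
import numpy as np


def hdr_to_ldr(hdr, exposure=1.0, gamma=2.2, bits=8):
    """Simulate capturing an LDR image from linear HDR radiance."""
    exposed = hdr * exposure                 # scale radiance by exposure
    clipped = np.clip(exposed, 0.0, 1.0)     # (1) dynamic range clipping
    crf = clipped ** (1.0 / gamma)           # (2) non-linear CRF (assumed gamma)
    levels = 2 ** bits - 1
    return np.round(crf * levels) / levels   # (3) quantization


if __name__ == "__main__":
    hdr = np.random.rand(64, 64, 3) * 4.0    # synthetic radiance beyond [0, 1]
    for ev in (0.25, 1.0, 4.0):              # an exposure bracket
        ldr = hdr_to_ldr(hdr, exposure=ev)
        print(f"exposure {ev:>4}: clipped fraction = {(hdr * ev > 1).mean():.2f}, "
              f"unique levels = {np.unique(ldr).size}")
```
Reversing these steps (dequantization, inverting the CRF, and recovering clipped content) is what that paper learns to do.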
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.