Embracing Compact and Robust Architectures for Multi-Exposure Image
Fusion
- URL: http://arxiv.org/abs/2305.12236v1
- Date: Sat, 20 May 2023 17:01:52 GMT
- Title: Embracing Compact and Robust Architectures for Multi-Exposure Image
Fusion
- Authors: Zhu Liu and Jinyuan Liu and Guanyao Wu and Xin Fan and Risheng Liu
- Abstract summary: We propose a search-based paradigm, involving self-alignment and detail repletion modules for robust multi-exposure image fusion.
By utilizing scene relighting and deformable convolutions, the self-alignment module can accurately align images despite camera movement.
We realize the state-of-the-art performance in comparison to various competitive schemes, yielding a 4.02% and 29.34% improvement in PSNR for general and misaligned scenarios.
- Score: 50.598654017728045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, deep learning-based methods have achieved remarkable
progress in multi-exposure image fusion. However, existing methods rely on
aligned image pairs, inevitably generating artifacts when faced with device
shaking in real-world scenarios. Moreover, these learning-based methods are
built on handcrafted architectures that simply increase network depth or
width, neglecting the distinct characteristics of different exposures. As a
result, such directly cascaded architectures carry redundant parameters, fail
to deliver efficient inference, and incur massive computation. To alleviate these
issues, in this paper, we propose a search-based paradigm, involving
self-alignment and detail repletion modules for robust multi-exposure image
fusion. By utilizing scene relighting and deformable convolutions, the
self-alignment module can accurately align images despite camera movement.
Furthermore, by imposing a hardware-sensitive constraint, we introduce neural
architecture search to discover compact and efficient networks, investigating
effective feature representation for fusion. We achieve state-of-the-art
performance compared with various competitive schemes, yielding 4.02% and
29.34% PSNR improvements for general and misaligned scenarios, respectively,
while reducing inference time by 68.1%. The source code will be available at
https://github.com/LiuZhu-CV/CRMEF.
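The hardware-sensitive constraint described in the abstract can be illustrated as a latency-penalized scoring of candidate operations during architecture search. This is only a minimal sketch: the operation names, quality scores, latencies, and trade-off weight below are illustrative assumptions, not the paper's actual search space or algorithm.

```python
# Sketch of a hardware-sensitive search objective: each candidate operation
# is scored by its task quality (e.g. validation PSNR) minus a penalty
# proportional to its measured latency. All values are placeholders.

CANDIDATE_OPS = {
    # op name: (validation quality in dB, latency in ms) -- illustrative only
    "conv3x3":      (34.1, 4.0),
    "conv5x5":      (34.4, 9.5),
    "dilated_conv": (34.0, 4.5),
    "dense_block":  (34.6, 18.0),
}

def hardware_aware_score(quality, latency_ms, lam=0.05):
    """Trade off task quality against inference latency."""
    return quality - lam * latency_ms

def select_op(ops, lam=0.05):
    """Pick the candidate maximizing the latency-penalized score."""
    return max(ops, key=lambda name: hardware_aware_score(*ops[name], lam=lam))
```

With the penalty weight `lam` set to zero the search degenerates to picking the heaviest, highest-quality block; increasing `lam` steers it toward compact operations, which is the intuition behind the constraint.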
Related papers
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- BusReF: Infrared-Visible images registration and fusion focus on reconstructible area using one set of features [39.575353043949725]
In a scenario where multi-modal cameras are operating together, the problem of working with non-aligned images cannot be avoided.
Existing image fusion algorithms rely heavily on strictly registered input image pairs to produce more precise fusion results.
This paper aims to address the problems of image registration and fusion in a single framework, called BusReF.
arXiv Detail & Related papers (2023-12-30T17:32:44Z)
- Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for Loss-free Multi-Exposure Image Fusion [60.221404321514086]
Multi-exposure image fusion (MEF) has emerged as a prominent solution to address the limitations of digital imaging in representing varied exposure levels.
This paper presents a Hybrid-Supervised Dual-Search approach for MEF, dubbed HSDS-MEF, which introduces a bi-level optimization search scheme for automatic design of both network structures and loss functions.
arXiv Detail & Related papers (2023-09-03T08:07:26Z)
- A Task-guided, Implicitly-searched and Meta-initialized Deep Model for Image Fusion [69.10255211811007]
We present a Task-guided, Implicitly-searched and Meta-initialized (TIM) deep model to address the image fusion problem in challenging real-world scenarios.
Specifically, we propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion.
Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency.
arXiv Detail & Related papers (2023-05-25T08:54:08Z)
- LRRNet: A Novel Representation Learning Guided Fusion Network for Infrared and Visible Images [98.36300655482196]
We formulate the fusion task mathematically, and establish a connection between its optimal solution and the network architecture that can implement it.
In particular, we adopt a learnable representation approach to the fusion task, in which the construction of the fusion network architecture is guided by the optimisation algorithm producing the learnable model.
Based on this novel network architecture, an end-to-end lightweight fusion network is constructed to fuse infrared and visible light images.
arXiv Detail & Related papers (2023-04-11T12:11:23Z)
- Unsupervised Image Fusion Method based on Feature Mutual Mapping [16.64607158983448]
We propose an unsupervised adaptive image fusion method to address the above issues.
We construct a global map to measure the connections of pixels between the input source images.
Our method achieves superior performance in both visual perception and objective evaluation.
arXiv Detail & Related papers (2022-01-25T07:50:14Z)
- End-to-End Learning for Simultaneously Generating Decision Map and Multi-Focus Image Fusion Result [7.564462759345851]
The aim of multi-focus image fusion is to gather focused regions of different images to generate a unique all-in-focus fused image.
Most existing deep learning structures fail to balance fusion quality with the convenience of end-to-end implementation.
We propose a cascade network to simultaneously generate decision map and fused result with an end-to-end training procedure.
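Decision-map fusion of this kind can be sketched in a few lines: the fused image takes each pixel from whichever source is locally sharper. In the paper the decision map is predicted by a network; the simple local-contrast measure below stands in for it and is purely an illustrative assumption.

```python
# Sketch of decision-map-based multi-focus fusion on plain nested lists.
# A local contrast measure substitutes for the learned decision network.

def local_contrast(img, x, y):
    """Absolute deviation of a pixel from the mean of its 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    vals = [img[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]
    mean = sum(vals) / len(vals)
    return abs(img[y][x] - mean)

def decision_map(a, b):
    """1 where image `a` is locally sharper, else 0."""
    return [[1 if local_contrast(a, x, y) >= local_contrast(b, x, y) else 0
             for x in range(len(a[0]))] for y in range(len(a))]

def fuse(a, b):
    """Take each pixel from the image the decision map selects."""
    d = decision_map(a, b)
    return [[a[y][x] if d[y][x] else b[y][x]
             for x in range(len(a[0]))] for y in range(len(a))]
```

The appeal of the end-to-end formulation in the paper is that the decision map and the fused result are produced jointly, instead of hand-crafting a sharpness measure like the one above.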
arXiv Detail & Related papers (2020-10-17T09:09:51Z)
- Single Image Brightening via Multi-Scale Exposure Fusion with Hybrid Learning [48.890709236564945]
A small ISO and a short exposure time are usually used to capture an image in backlit or low-light conditions.
In this paper, a single image brightening algorithm is introduced to brighten such an image.
The proposed algorithm includes a unique hybrid learning framework to generate two virtual images with large exposure times.
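The step that fuses the real image with the generated virtual exposures can be sketched with a Mertens-style well-exposedness weight: pixels near mid-gray are trusted most in each exposure. This single-scale sketch is an illustrative assumption, not the paper's hybrid learning framework, and the Gaussian weight and its sigma are conventional choices rather than the authors' exact design.

```python
import math

def well_exposedness(p, sigma=0.2):
    """Mertens-style weight: intensities near mid-gray (0.5) count more."""
    return math.exp(-((p - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(images, sigma=0.2):
    """Per-pixel weighted average of exposures (nested lists in [0, 1])."""
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ws = [well_exposedness(img[y][x], sigma) for img in images]
            total = sum(ws)
            out[y][x] = sum(wk * img[y][x]
                            for wk, img in zip(ws, images)) / total
    return out
```

Practical exposure fusion applies such weights per scale of a Laplacian pyramid to avoid seams; the per-pixel average above only conveys the weighting idea.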
arXiv Detail & Related papers (2020-07-04T08:23:07Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.