PASTA: Towards Flexible and Efficient HDR Imaging Via Progressively Aggregated Spatio-Temporal Alignment
- URL: http://arxiv.org/abs/2403.10376v2
- Date: Tue, 9 Apr 2024 09:52:54 GMT
- Title: PASTA: Towards Flexible and Efficient HDR Imaging Via Progressively Aggregated Spatio-Temporal Alignment
- Authors: Xiaoning Liu, Ao Li, Zongwei Wu, Yapeng Du, Le Zhang, Yulun Zhang, Radu Timofte, Ce Zhu
- Abstract summary: PASTA is a Progressively Aggregated Spatio-Temporal Alignment framework for HDR deghosting.
Our approach achieves effectiveness and efficiency by harnessing hierarchical representation during feature disentanglement.
Experimental results showcase PASTA's superiority over current SOTA methods in both visual quality and performance metrics.
- Score: 91.38256332633544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Leveraging Transformer attention has led to great advancements in HDR deghosting. However, the intricate nature of self-attention introduces practical challenges, as existing state-of-the-art methods often demand high-end GPUs or exhibit slow inference speeds, especially for high-resolution images like 2K. Striking an optimal balance between performance and latency remains a critical concern. In response, this work presents PASTA, a novel Progressively Aggregated Spatio-Temporal Alignment framework for HDR deghosting. Our approach achieves effectiveness and efficiency by harnessing hierarchical representation during feature disentanglement. Through the utilization of diverse granularities within the hierarchical structure, our method substantially boosts computational speed and optimizes the HDR imaging workflow. In addition, we explore within-scale feature modeling with local and global attention, gradually merging and refining them in a coarse-to-fine fashion. Experimental results showcase PASTA's superiority over current SOTA methods in both visual quality and performance metrics, accompanied by a substantial 3-fold (x3) increase in inference speed.
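The abstract's coarse-to-fine merging of within-scale local and global attention can be sketched roughly as follows. This is a minimal, hypothetical NumPy illustration of the general idea (the function names and the x2 upsampling scheme are our assumptions, not PASTA's actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over the token dimension.
    scale = 1.0 / np.sqrt(q.shape[-1])
    return softmax(q @ k.T * scale) @ v

def local_attention(x, window=4):
    # Split tokens into non-overlapping windows and attend within each.
    out = np.empty_like(x)
    for s in range(0, x.shape[0], window):
        w = x[s:s + window]
        out[s:s + window] = attention(w, w, w)
    return out

def global_attention(x):
    # All tokens attend to all tokens.
    return attention(x, x, x)

def coarse_to_fine_fuse(pyramid):
    # pyramid: list of (tokens, channels) features, coarsest level first.
    # At each scale, model within-scale context with local and global
    # attention, then upsample the running aggregate and merge it into
    # the next (finer) level, progressively refining the result.
    agg = None
    for feat in pyramid:
        fused = 0.5 * (local_attention(feat) + global_attention(feat))
        if agg is not None:
            # Nearest-neighbour x2 upsampling along the token dimension.
            agg = np.repeat(agg, 2, axis=0)[: fused.shape[0]]
            fused = fused + agg
        agg = fused
    return agg
```

The hierarchical pyramid is what makes this cheap: global attention is quadratic in token count, so running it mostly on coarse (small) levels and relying on windowed attention at fine levels keeps the cost manageable at 2K resolutions.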
Related papers
- FlowIE: Efficient Image Enhancement via Rectified Flow [71.6345505427213]
FlowIE is a flow-based framework that estimates straight-line paths from an elementary distribution to high-quality images.
Our contributions are rigorously validated through comprehensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-06-01T17:29:29Z)
- Improving Bracket Image Restoration and Enhancement with Flow-guided Alignment and Enhanced Feature Aggregation [32.69740459810521]
We present the IREANet, which improves multiple-exposure alignment and aggregation with a Flow-guided Feature Alignment Module (FFAM) and an Enhanced Feature Aggregation Module (EFAM).
Our experimental evaluations demonstrate that the proposed IREANet shows state-of-the-art performance compared with previous methods.
arXiv Detail & Related papers (2024-04-16T07:46:55Z)
- Efficient Diffusion Model for Image Restoration by Residual Shifting [63.02725947015132]
This study proposes a novel and efficient diffusion model for image restoration.
Our method avoids the need for post-acceleration during inference, thereby avoiding the associated performance deterioration.
Our method achieves superior or comparable performance to current state-of-the-art methods on three classical IR tasks.
arXiv Detail & Related papers (2024-03-12T05:06:07Z)
- Gyroscope-Assisted Motion Deblurring Network [11.404195533660717]
This paper presents a framework to synthesize and restore motion-blurred images using Inertial Measurement Unit (IMU) data.
The framework includes a strategy for training triplet generation, and a Gyroscope-Aided Motion Deblurring (GAMD) network for blurred image restoration.
arXiv Detail & Related papers (2024-02-10T01:30:24Z)
- fMPI: Fast Novel View Synthesis in the Wild with Layered Scene Representations [9.75588035624177]
We propose two novel input processing paradigms for novel view synthesis (NVS) methods.
Our approach identifies and mitigates the two most time-consuming aspects of traditional pipelines.
We demonstrate that our proposed paradigms enable the design of an NVS method that achieves state-of-the-art on public benchmarks.
arXiv Detail & Related papers (2023-12-26T16:24:08Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- RealLiFe: Real-Time Light Field Reconstruction via Hierarchical Sparse Gradient Descent [23.4659443904092]
EffLiFe is a novel light field optimization method that produces high-quality light fields from sparse view images in real time.
Our method achieves comparable visual quality while being 100x faster on average than state-of-the-art offline methods.
arXiv Detail & Related papers (2023-07-06T14:31:01Z)
- Scale-aware Two-stage High Dynamic Range Imaging [13.587403084724015]
We propose a scale-aware two-stage high dynamic range imaging framework (ST) to generate high-quality, ghost-free image compositions.
Specifically, our framework consists of feature alignment and two-stage fusion.
In the first stage of feature fusion, we obtain a preliminary result with few ghosting artifacts.
In the second stage, we validate the effectiveness of the proposed ST in terms of speed and quality.
arXiv Detail & Related papers (2023-03-12T05:17:24Z)
- PDRF: Progressively Deblurring Radiance Field for Fast and Robust Scene Reconstruction from Blurry Images [75.87721926918874]
We present Progressively Deblurring Radiance Field (PDRF), a novel approach to efficiently reconstruct high-quality radiance fields from blurry images.
We show that PDRF is 15x faster than previous state-of-the-art scene reconstruction methods.
arXiv Detail & Related papers (2022-08-17T03:42:29Z)
- PAN: Towards Fast Action Recognition via Learning Persistence of Appearance [60.75488333935592]
Most state-of-the-art methods heavily rely on dense optical flow as motion representation.
In this paper, we shed light on fast action recognition by lifting the reliance on optical flow.
We design a novel motion cue called Persistence of Appearance (PA).
In contrast to optical flow, our PA focuses more on distilling the motion information at boundaries.
arXiv Detail & Related papers (2020-08-08T07:09:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.