FPANet: Frequency-based Video Demoireing using Frame-level Post
Alignment
- URL: http://arxiv.org/abs/2301.07330v2
- Date: Mon, 19 Jun 2023 16:10:19 GMT
- Title: FPANet: Frequency-based Video Demoireing using Frame-level Post
Alignment
- Authors: Gyeongrok Oh, Heon Gu, Jinkyu Kim, Sangpil Kim
- Abstract summary: We propose a novel model called FPANet that learns filters in both frequency and spatial domains.
We demonstrate the effectiveness of our proposed method with a publicly available large-scale dataset.
- Score: 6.507353572917133
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Interference between overlapping grid patterns creates moire patterns,
degrading the visual quality of images captured when an ordinary digital camera
photographs the screen of a digital display device. Removing such moire patterns
is challenging because of their complex structure, diverse scales, and color
distortions. Existing approaches mainly filter in the spatial domain and fail to
remove large-scale moire patterns. In this paper, we
propose a novel model called FPANet that learns filters in both frequency and
spatial domains, improving the restoration quality by removing various sizes of
moire patterns. To enhance this further, our model takes multiple consecutive
frames, learning to extract frame-invariant content features and to output
temporally consistent, higher-quality images. We demonstrate the effectiveness
of our proposed method on a publicly available large-scale dataset, observing
that it outperforms state-of-the-art approaches, including ESDNet,
VDmoire, MBCNN, WDNet, UNet, and DMCNN, in terms of the image and video quality
metrics, such as PSNR, SSIM, LPIPS, FVD, and FSIM.
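The abstract motivates filtering in the frequency domain because large-scale, periodic moire patterns are hard to isolate with purely spatial filtering. The snippet below is a minimal, illustrative sketch of that idea using NumPy; it is not the FPANet architecture, and the Gaussian low-pass mask and cutoff value are assumptions chosen only for illustration.

```python
import numpy as np

def frequency_filter(image: np.ndarray, cutoff: float = 30.0) -> np.ndarray:
    """Suppress high-frequency content of a grayscale image via a 2-D FFT."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))        # centre the zero frequency

    # Gaussian low-pass mask around the spectrum centre; a larger cutoff keeps more detail.
    yy, xx = np.mgrid[:h, :w]
    dist2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    mask = np.exp(-dist2 / (2.0 * cutoff ** 2))

    filtered = spectrum * mask                             # attenuate high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

# Toy usage: a flat image with superimposed fine "moire-like" stripes.
img = 0.5 * np.ones((128, 128))
img += 0.2 * np.sin(0.8 * np.arange(128))[None, :]        # high-frequency interference
restored = frequency_filter(img, cutoff=10.0)              # stripes are strongly attenuated
```

In FPANet the frequency-domain filters are learned rather than hand-crafted; this sketch only shows why operating on the spectrum makes large-scale periodic artifacts easy to attenuate.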
Related papers
- A Global Depth-Range-Free Multi-View Stereo Transformer Network with Pose Embedding [76.44979557843367]
We propose a novel multi-view stereo (MVS) framework that gets rid of the depth range prior.
We introduce a Multi-view Disparity Attention (MDA) module to aggregate long-range context information.
We explicitly estimate the quality of the current pixel corresponding to sampled points on the epipolar line of the source image.
arXiv Detail & Related papers (2024-11-04T08:50:16Z) - Pixel-Aligned Multi-View Generation with Depth Guided Decoder [86.1813201212539]
We propose a novel method for pixel-level image-to-multi-view generation.
Unlike prior work, we incorporate attention layers across multi-view images in the VAE decoder of a latent video diffusion model.
Our model enables better pixel alignment across multi-view images.
arXiv Detail & Related papers (2024-08-26T04:56:41Z) - MultiDiff: Consistent Novel View Synthesis from a Single Image [60.04215655745264]
MultiDiff is a novel approach for consistent novel view synthesis of scenes from a single RGB image.
Our results demonstrate that MultiDiff outperforms state-of-the-art methods on the challenging, real-world datasets RealEstate10K and ScanNet.
arXiv Detail & Related papers (2024-06-26T17:53:51Z) - ShapeMoiré: Channel-Wise Shape-Guided Network for Image Demoiréing [19.56605254816149]
Photographing optoelectronic displays often introduces unwanted moiré patterns due to analog signal interference.
This work identifies two problems that are largely ignored by existing image demoiréing approaches.
We propose a ShapeMoiré method to aid in image demoiréing.
arXiv Detail & Related papers (2024-04-28T12:12:08Z) - VideoMV: Consistent Multi-View Generation Based on Large Video Generative Model [34.35449902855767]
Two fundamental questions are what data we use for training and how to ensure multi-view consistency.
We propose a dense consistent multi-view generation model that is fine-tuned from off-the-shelf video generative models.
Our approach can generate 24 dense views and converges much faster in training than state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-18T17:48:15Z) - AADNet: Attention aware Demoiréing Network [2.1626093085892144]
Moiré patterns frequently appear in photographs captured with mobile devices and digital cameras.
We propose a novel lightweight architecture, AADNet, for high-resolution image demoiréing.
arXiv Detail & Related papers (2024-03-13T09:48:11Z) - DeepMultiCap: Performance Capture of Multiple Characters Using Sparse
Multiview Cameras [63.186486240525554]
DeepMultiCap is a novel method for multi-person performance capture using sparse multi-view cameras.
Our method can capture time-varying surface details without the need for pre-scanned template models.
arXiv Detail & Related papers (2021-05-01T14:32:13Z) - Learning Joint Spatial-Temporal Transformations for Video Inpainting [58.939131620135235]
We propose to learn a joint Spatial-Temporal Transformer Network (STTN) for video inpainting.
We simultaneously fill missing regions in all input frames by self-attention, and propose to optimize STTN by a spatial-temporal adversarial loss.
arXiv Detail & Related papers (2020-07-20T16:35:48Z) - Wavelet-Based Dual-Branch Network for Image Demoireing [148.91145614517015]
We design a wavelet-based dual-branch network (WDNet) with a spatial attention mechanism for image demoireing.
Our network removes moire patterns in the wavelet domain to separate the frequencies of moire patterns from the image content.
Experiments demonstrate the effectiveness of our method, and we further show that WDNet generalizes to removing moire artifacts on non-screen images.
arXiv Detail & Related papers (2020-07-14T16:44:30Z)
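The WDNet entry above describes removing moire patterns in the wavelet domain so that their frequencies are separated from image content. The following is a minimal, illustrative sketch of that kind of sub-band processing using PyWavelets; it is not the WDNet architecture, and the Haar wavelet and attenuation factor are assumptions chosen only for illustration.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_attenuate(image: np.ndarray, factor: float = 0.5) -> np.ndarray:
    """One-level 2-D DWT, damp the high-frequency sub-bands, then invert."""
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")            # approximation + detail bands
    cH, cV, cD = (factor * c for c in (cH, cV, cD))        # damp horizontal/vertical/diagonal detail
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

img = np.random.rand(64, 64)                               # stand-in for an input image
out = wavelet_attenuate(img)                                # detail bands scaled down by `factor`
```

WDNet itself learns the sub-band processing with a dual-branch design and spatial attention; this sketch only shows how a wavelet transform exposes the frequency separation that such a network exploits.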
This list is automatically generated from the titles and abstracts of the papers on this site.