Video Demoireing using Focused-Defocused Dual-Camera System
- URL: http://arxiv.org/abs/2508.03449v1
- Date: Tue, 05 Aug 2025 13:49:49 GMT
- Title: Video Demoireing using Focused-Defocused Dual-Camera System
- Authors: Xuan Dong, Xiangyuan Sun, Xia Wang, Jian Song, Ya Li, Weixin Li
- Abstract summary: Existing demoireing methods rely on single-camera image/video processing. We propose a dual-camera framework that captures synchronized videos of the same scene. We use the defocused video to help distinguish moire patterns from real texture, so as to guide the demoireing of the focused video.
- Score: 21.59133575445115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Moire patterns, unwanted color artifacts in images and videos, arise from the interference between spatially high-frequency scene content and the discrete spatial sampling of digital cameras. Existing demoireing methods primarily rely on single-camera image/video processing, which faces two critical challenges: 1) distinguishing moire patterns from visually similar real textures, and 2) preserving tonal consistency and temporal coherence while removing moire artifacts. To address these issues, we propose a dual-camera framework that captures synchronized videos of the same scene: one in focus (retaining high-quality textures but possibly exhibiting moire patterns) and one defocused (with significantly reduced moire patterns but blurred textures). We use the defocused video to help distinguish moire patterns from real textures, thereby guiding the demoireing of the focused video. We propose a frame-wise demoireing pipeline, which begins with an optical-flow-based alignment step to address any discrepancies in displacement and occlusion between the focused and defocused frames. We then leverage the aligned defocused frame to guide the demoireing of the focused frame using a multi-scale CNN and a multi-dimensional training loss. To maintain tonal and temporal consistency, our final step applies a joint bilateral filter that uses the CNN's demoireing result as the guide to filter the input focused frame and obtain the final output. Experimental results demonstrate that our proposed framework largely outperforms state-of-the-art image and video demoireing methods.
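A minimal sketch of how such a frame-wise pipeline could be wired up, in Python with OpenCV. The `demoire_net` callable is a placeholder standing in for the paper's multi-scale CNN (not publicly specified here), and Farneback flow plus OpenCV's joint bilateral filter are illustrative stand-ins rather than the authors' exact components:

```python
# Hypothetical sketch of the frame-wise demoireing pipeline described above:
# (1) align the defocused frame to the focused one with dense optical flow,
# (2) run a placeholder demoireing network on the focused frame with the
#     aligned defocused frame as guidance,
# (3) joint-bilateral-filter the input focused frame using the CNN output
#     as the guide, so the final frame keeps the input's tones.
import cv2
import numpy as np

def align_defocused(focused: np.ndarray, defocused: np.ndarray) -> np.ndarray:
    """Warp the defocused frame onto the focused frame via dense optical flow."""
    f_gray = cv2.cvtColor(focused, cv2.COLOR_BGR2GRAY)
    d_gray = cv2.cvtColor(defocused, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        f_gray, d_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = f_gray.shape
    grid = np.mgrid[0:h, 0:w].astype(np.float32)
    map_x = grid[1] + flow[..., 0]  # sample defocused at focused's coordinates
    map_y = grid[0] + flow[..., 1]
    return cv2.remap(defocused, map_x, map_y, cv2.INTER_LINEAR)

def demoire_frame(focused, defocused, demoire_net):
    """Process one frame pair. `demoire_net` is a hypothetical stand-in for
    the paper's multi-scale CNN."""
    aligned = align_defocused(focused, defocused)
    cnn_out = demoire_net(focused, aligned)
    # Final step: joint bilateral filter with the CNN result as the guide,
    # filtering the input focused frame (requires opencv-contrib-python).
    return cv2.ximgproc.jointBilateralFilter(
        cnn_out, focused, d=9, sigmaColor=25, sigmaSpace=9)
```

The design point the sketch mirrors is the last step: the CNN output serves only as the bilateral guide, so the final frame inherits the tonal characteristics of the input focused frame, which is what the abstract credits for tonal and temporal consistency.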
Related papers
- StabStitch++: Unsupervised Online Video Stitching with Spatiotemporal Bidirectional Warps [81.8786100662034]
We retarget video stitching to an emerging issue, named warping shake, which describes the temporal content shakes induced by sequentially unsmooth warps when extending image stitching to video stitching. To address this issue, we propose StabStitch++, a novel video stitching framework that realizes spatial stitching and temporal stabilization simultaneously with unsupervised learning.
arXiv Detail & Related papers (2025-05-08T07:12:23Z) - Bokeh Diffusion: Defocus Blur Control in Text-to-Image Diffusion Models [26.79219274697864]
Bokeh Diffusion is a scene-consistent bokeh control framework. We introduce a hybrid training pipeline that aligns in-the-wild images with synthetic blur augmentations. Our approach enables flexible, lens-like blur control and supports downstream applications such as real image editing via inversion.
arXiv Detail & Related papers (2025-03-11T13:49:12Z) - InVi: Object Insertion In Videos Using Off-the-Shelf Diffusion Models [46.587906540660455]
We introduce InVi, an approach for inserting or replacing objects within videos using off-the-shelf, text-to-image latent diffusion models.
InVi achieves realistic object insertion with consistent blending and coherence across frames, outperforming existing methods.
arXiv Detail & Related papers (2024-07-15T17:55:09Z) - DaBiT: Depth and Blur informed Transformer for Video Focal Deblurring [4.332534893042983]
In many real-world scenarios, recorded videos suffer from accidental focus blur. This paper introduces a framework optimized for the as-yet-unattempted task of video focal deblurring (refocusing). We achieve state-of-the-art results, with average PSNR over 1.9 dB higher than comparable existing video restoration methods.
arXiv Detail & Related papers (2024-07-01T12:22:16Z) - Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation [93.18163456287164]
This paper proposes a novel text-guided video-to-video translation framework to adapt image models to videos.
Our framework achieves global style and local texture temporal consistency at a low cost.
arXiv Detail & Related papers (2023-06-13T17:52:23Z) - Feature-Aligned Video Raindrop Removal with Temporal Constraints [68.49161092870224]
Raindrop removal is challenging for both single images and videos.
Unlike rain streaks, adherent raindrops tend to cover the same area across several frames.
We propose a two-stage video-based raindrop removal method.
arXiv Detail & Related papers (2022-05-29T05:42:14Z) - Video Demoireing with Relation-Based Temporal Consistency [68.20281109859998]
Moire patterns, appearing as color distortions, severely degrade image and video qualities when filming a screen with digital cameras.
We study how to remove such undesirable moire patterns in videos, namely video demoireing.
arXiv Detail & Related papers (2022-04-06T17:45:38Z) - Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z) - FineNet: Frame Interpolation and Enhancement for Face Video Deblurring [18.49184807837449]
The aim of this work is to deblur face videos.
We propose a method that tackles this problem from two directions: (1) enhancing the blurry frames, and (2) treating the blurry frames as missing values and estimating them by interpolation.
Experiments on three real and synthetically generated video datasets show that our method outperforms the previous state-of-the-art methods by a large margin in terms of both quantitative and qualitative results.
arXiv Detail & Related papers (2021-03-01T09:47:16Z) - Self-Adaptively Learning to Demoire from Focused and Defocused Image Pairs [97.67638106818613]
Moire artifacts are common in digital photography, resulting from the interference between high-frequency scene content and the color filter array of the camera.
Existing deep learning-based demoireing methods trained on large-scale datasets are limited in handling various complex moire patterns.
We propose a self-adaptive learning method for demoireing a high-frequency image, with the help of an additional defocused moire-free blur image.
arXiv Detail & Related papers (2020-11-03T23:09:02Z)