Driving-Video Dehazing with Non-Aligned Regularization for Safety Assistance
- URL: http://arxiv.org/abs/2405.09996v1
- Date: Thu, 16 May 2024 11:28:01 GMT
- Title: Driving-Video Dehazing with Non-Aligned Regularization for Safety Assistance
- Authors: Junkai Fan, Jiangwei Weng, Kun Wang, Yijun Yang, Jianjun Qian, Jun Li, Jian Yang
- Abstract summary: Real driving-video dehazing poses a significant challenge due to the inherent difficulty in acquiring precisely aligned hazy/clear video pairs.
We propose a pioneering approach that addresses this challenge through a non-aligned regularization strategy.
Our approach comprises two key components: reference matching and video dehazing.
- Score: 24.671417176179187
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real driving-video dehazing poses a significant challenge due to the inherent difficulty in acquiring precisely aligned hazy/clear video pairs for effective model training, especially in dynamic driving scenarios with unpredictable weather conditions. In this paper, we propose a pioneering approach that addresses this challenge through a non-aligned regularization strategy. Our core concept involves identifying clear frames that closely match hazy frames, serving as references to supervise a video dehazing network. Our approach comprises two key components: reference matching and video dehazing. First, we introduce a non-aligned reference frame matching module, leveraging an adaptive sliding window to match high-quality reference frames from clear videos. Second, the video dehazing network incorporates a flow-guided cosine attention sampler and a deformable cosine attention fusion module to enhance spatial multi-frame alignment and fuse the improved information. To validate our approach, we collect a GoProHazy dataset, captured effortlessly with GoPro cameras in diverse rural and urban road environments. Extensive experiments demonstrate the superiority of the proposed method over current state-of-the-art methods in the challenging task of real driving-video dehazing.
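As a rough illustration of the reference-matching component, the sketch below scores clear-video frames inside a sliding window against a hazy frame using cosine similarity of pooled features. The function name, the pooled-feature comparison, and the fixed window radius are illustrative assumptions; the paper's module is adaptive and operates on learned features.

```python
# A minimal sketch of non-aligned reference frame matching, assuming
# pooled per-frame features from some pretrained encoder. The fixed
# `radius` and the global (pooled) comparison are hypothetical choices;
# the paper's adaptive sliding window is more involved.
import torch
import torch.nn.functional as F

def match_reference(hazy_feat, clear_feats, center, radius=15):
    """Return the index and score of the clear frame most similar to the
    hazy frame, searching a window of clear frames around `center`.

    hazy_feat:   (C,) pooled feature of the current hazy frame
    clear_feats: (N, C) pooled features of the clear video's frames
    """
    lo = max(0, center - radius)
    hi = min(clear_feats.shape[0], center + radius + 1)
    window = clear_feats[lo:hi]                                        # (W, C) candidates
    sims = F.cosine_similarity(window, hazy_feat.unsqueeze(0), dim=-1) # (W,)
    best = int(sims.argmax())
    return lo + best, float(sims[best])                                # global index, score
```

In use, the window center would presumably track the previous frame's best match so the search slides along the clear video as the hazy video advances; the matched frame then serves as a non-aligned supervision target for the dehazing network.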
Related papers
- DiVE: DiT-based Video Generation with Enhanced Control [23.63288169762629]
We propose the first DiT-based framework specifically designed for generating temporally and multi-view consistent videos.
Specifically, the proposed framework leverages a parameter-free spatial view-inflated attention mechanism to guarantee cross-view consistency.
arXiv Detail & Related papers (2024-09-03T04:29:59Z)
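As context for the view-inflated attention named in the DiVE entry above, a common reading in the multi-view generation literature is to fold the view axis into the token axis so a plain self-attention attends across all views at once; being a reshape plus weight-free attention, it adds no parameters. This is a hedged sketch of that reading, not DiVE's verified implementation.

```python
# A sketch of parameter-free view-inflated self-attention (an assumption
# about what "view-inflated" means here): fold views into the token axis,
# run plain scaled dot-product attention, then unfold.
import torch

def view_inflated_attention(x, num_views):
    """x: (batch * num_views, tokens, channels) per-view features."""
    bv, n, c = x.shape
    b = bv // num_views
    x = x.reshape(b, num_views * n, c)                 # views join the token axis
    attn = torch.softmax(x @ x.transpose(1, 2) / c ** 0.5, dim=-1)
    out = attn @ x                                     # tokens attend across views
    return out.reshape(bv, n, c)
```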
- Zero-Shot Video Editing through Adaptive Sliding Score Distillation [51.57440923362033]
This study proposes a novel paradigm of video-based score distillation, facilitating direct manipulation of original video content.
We propose an Adaptive Sliding Score Distillation strategy, which incorporates both global and local video guidance to reduce the impact of editing errors.
arXiv Detail & Related papers (2024-06-07T12:33:59Z)
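For orientation, the sketch below shows the vanilla score-distillation gradient that strategies like the one above build on: noise a latent, query a pretrained diffusion model, and use the weighted residual between predicted and injected noise as the update direction. The `unet` call signature and `alphas_cumprod` are hypothetical stand-ins, and the paper's adaptive sliding variant with global/local guidance is not shown.

```python
# A minimal sketch of a vanilla score-distillation (SDS-style) gradient;
# `unet(x_t, t, cond)` and `alphas_cumprod` stand in for a pretrained
# diffusion model and its noise schedule.
import torch

def sds_grad(x, cond, unet, alphas_cumprod):
    t = torch.randint(0, alphas_cumprod.shape[0], (1,))
    a = alphas_cumprod[t]                           # cumulative alpha at step t
    noise = torch.randn_like(x)
    x_t = a.sqrt() * x + (1 - a).sqrt() * noise     # forward-diffuse the latent
    with torch.no_grad():
        eps_pred = unet(x_t, t, cond)               # model's noise estimate
    return (1 - a) * (eps_pred - noise)             # treated as the grad w.r.t. x
```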
- TrackDiffusion: Tracklet-Conditioned Video Generation via Diffusion Models [75.20168902300166]
We propose TrackDiffusion, a novel video generation framework affording fine-grained trajectory-conditioned motion control.
A pivotal component of TrackDiffusion is the instance enhancer, which explicitly ensures inter-frame consistency of multiple objects.
Video sequences generated by our TrackDiffusion can be used as training data for visual perception models.
arXiv Detail & Related papers (2023-12-01T15:24:38Z)
- Control-A-Video: Controllable Text-to-Video Diffusion Models with Motion Prior and Reward Feedback Learning [50.60891619269651]
Control-A-Video is a controllable T2V diffusion model that can generate videos conditioned on text prompts and reference control maps like edge and depth maps.
We propose novel strategies to incorporate content prior and motion prior into the diffusion-based generation process.
Our framework generates higher-quality, more consistent videos compared to existing state-of-the-art methods in controllable text-to-video generation.
arXiv Detail & Related papers (2023-05-23T09:03:19Z)
- InstructVid2Vid: Controllable Video Editing with Natural Language Instructions [97.17047888215284]
InstructVid2Vid is an end-to-end diffusion-based methodology for video editing guided by human language instructions.
Our approach empowers video manipulation guided by natural language directives, eliminating the need for per-example fine-tuning or inversion.
arXiv Detail & Related papers (2023-05-21T03:28:13Z)
- Video Dehazing via a Multi-Range Temporal Alignment Network with Physical Prior [117.6741444489174]
Video dehazing aims to recover haze-free frames with high visibility and contrast.
This paper presents a novel framework to explore the physical haze priors and aggregate temporal information.
We construct the first large-scale outdoor video dehazing benchmark dataset.
arXiv Detail & Related papers (2023-03-17T03:44:17Z)
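The physical prior in the entry above typically refers to the atmospheric scattering model, I = J·t + A·(1 − t); a minimal sketch under that assumption follows, with the transmission map and airlight taken as given (estimating them is the actual difficulty).

```python
# The atmospheric scattering model commonly used as a physical haze prior:
# I = J * t + A * (1 - t), with transmission t = exp(-beta * depth).
# Here t and A are assumed known; real methods must estimate them.
import torch

def synthesize_haze(J, t, A):
    """J: clean frame (C,H,W); t: transmission (1,H,W); A: airlight (C,1,1)."""
    return J * t + A * (1 - t)

def dehaze(I, t, A, t_min=0.1):
    """Invert the model: J = (I - A * (1 - t)) / t, clamping t for stability."""
    tc = t.clamp(min=t_min)
    return (I - A * (1 - tc)) / tc
```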
- Video Demoireing with Relation-Based Temporal Consistency [68.20281109859998]
Moire patterns, appearing as color distortions, severely degrade image and video quality when filming a screen with digital cameras.
We study how to remove such undesirable moire patterns in videos, namely video demoireing.
arXiv Detail & Related papers (2022-04-06T17:45:38Z)
- Deep Motion Blind Video Stabilization [4.544151613454639]
This work aims to declutter this over-complicated formulation of video stabilization with the help of a novel dataset.
We successfully learn motion blind full-frame video stabilization through employing strictly conventional generative techniques.
Our method achieves a $\sim 3\times$ speed-up over the fastest currently available video stabilization methods.
arXiv Detail & Related papers (2020-11-19T07:26:06Z)
- Deep Slow Motion Video Reconstruction with Hybrid Imaging System [12.340049542098148]
Current techniques increase the frame rate of standard videos through frame interpolation by assuming linear object motion, which is not valid in challenging cases.
We propose a two-stage deep learning system consisting of alignment and appearance estimation.
We train our model on synthetically generated hybrid videos and show high-quality results on a variety of test scenes.
arXiv Detail & Related papers (2020-02-27T14:18:12Z)
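To make the linear object-motion assumption in the last entry concrete, the sketch below synthesizes an intermediate frame by scaling the frame-to-frame optical flow by t and backward-warping; the warp direction is a crude approximation, and cases where motion is not linear are exactly the failure mode the paper targets.

```python
# A sketch of the linear-motion assumption behind simple frame interpolation:
# flow to time t is taken as t * flow(0->1). The backward warp is a crude
# approximation; proper methods reason about flow defined at time t.
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Sample `frame` (B,C,H,W) at positions displaced by `flow` (B,2,H,W) pixels."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys)).float().unsqueeze(0) + flow      # absolute coords
    grid = torch.stack((2 * grid[:, 0] / (w - 1) - 1,             # x in [-1, 1]
                        2 * grid[:, 1] / (h - 1) - 1), dim=-1)    # y in [-1, 1]
    return F.grid_sample(frame, grid, align_corners=True)

def interpolate_linear(frame0, flow_0to1, t):
    """Frame at time t in (0, 1) under the linear-motion assumption."""
    return backward_warp(frame0, t * flow_0to1)
```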
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.