FALCON: Frequency Adjoint Link with CONtinuous Density Mask for Fast Single Image Dehazing
- URL: http://arxiv.org/abs/2407.00972v1
- Date: Mon, 1 Jul 2024 05:16:26 GMT
- Title: FALCON: Frequency Adjoint Link with CONtinuous Density Mask for Fast Single Image Dehazing
- Authors: Donghyun Kim, Seil Kang, Seong Jae Hwang
- Abstract summary: This work introduces FALCON, a single-image dehazing system achieving state-of-the-art performance on both quality and speed.
We leverage the underlying haze distribution based on the atmospheric scattering model via a Continuous Density Mask.
Experiments involving multiple state-of-the-art methods and ablation analysis demonstrate FALCON's exceptional performance in both dehazing quality and speed.
- Score: 8.703680337470285
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image dehazing, addressing atmospheric interference like fog and haze, remains a pervasive challenge crucial for robust vision applications such as surveillance and remote sensing under adverse visibility. While various methodologies have evolved from early works predicting transmission matrix and atmospheric light features to deep learning and dehazing networks, they innately prioritize dehazing quality metrics, neglecting the need for real-time applicability in time-sensitive domains like autonomous driving. This work introduces FALCON (Frequency Adjoint Link with CONtinuous density mask), a single-image dehazing system achieving state-of-the-art performance on both quality and speed. Particularly, we develop a novel bottleneck module, namely, Frequency Adjoint Link, operating in the frequency space to globally expand the receptive field with minimal growth in network size. Further, we leverage the underlying haze distribution based on the atmospheric scattering model via a Continuous Density Mask (CDM) which serves as a continuous-valued mask input prior and a differentiable auxiliary loss. Comprehensive experiments involving multiple state-of-the-art methods and ablation analysis demonstrate FALCON's exceptional performance in both dehazing quality and speed (i.e., >180 frames per second), quantified by metrics such as FPS, PSNR, and SSIM.
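As a rough illustration of the two components named in the abstract, the sketch below pairs a frequency-space bottleneck (a global receptive field obtained by convolving the image spectrum) with a continuous-valued haze-density mask derived from the atmospheric scattering model I = J·t + A·(1−t). This is not the authors' released code: the module names, layer choices, and the dark-channel-style transmission estimate are assumptions.

```python
# Hypothetical sketch only -- not the FALCON authors' implementation.
import torch
import torch.nn as nn


class FrequencyLink(nn.Module):
    """Pointwise convolutions applied to the image spectrum, so every output
    pixel depends on all input pixels (global receptive field) while adding
    only a small number of parameters."""

    def __init__(self, channels: int):
        super().__init__()
        self.spectral_conv = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")           # complex spectrum
        feat = torch.cat([spec.real, spec.imag], dim=1)   # stack real/imag parts
        feat = self.spectral_conv(feat)
        real, imag = feat.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")


def continuous_density_mask(hazy: torch.Tensor, omega: float = 0.95) -> torch.Tensor:
    """Continuous-valued haze-density estimate in [0, 1] based on the
    scattering model I = J * t + A * (1 - t); denser haze maps closer to 1.
    The dark-channel-style transmission estimate here is an assumption."""
    dark = hazy.min(dim=1, keepdim=True).values                     # per-pixel dark channel
    airlight = dark.flatten(1).max(dim=1).values.view(-1, 1, 1, 1)  # crude atmospheric light
    transmission = 1.0 - omega * dark / airlight.clamp(min=1e-6)    # estimated t
    return (1.0 - transmission).clamp(0.0, 1.0)                     # density = 1 - t
```

In a FALCON-like pipeline, such a density mask could be concatenated with the hazy input as a prior channel and also supervised as a differentiable auxiliary loss, which is how the abstract describes the CDM being used.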
Related papers
- SpectralMamba: Efficient Mamba for Hyperspectral Image Classification [39.18999103115206]
Recurrent neural networks and Transformers have dominated most applications in hyperspectral (HS) imaging.
We propose SpectralMamba -- a novel, efficient deep learning framework that incorporates state space models for HS image classification.
We show that SpectralMamba surprisingly creates promising win-wins from both performance and efficiency perspectives.
arXiv Detail & Related papers (2024-04-12T14:12:03Z)
- Flow-Attention-based Spatio-Temporal Aggregation Network for 3D Mask Detection [12.160085404239446]
We propose a novel 3D mask detection framework called FASTEN.
We tailor the network to focus more on fine details in large movements, which eliminates redundant temporal feature interference.
FASTEN only requires five frames of input and outperforms eight competitors in both intra-dataset and cross-dataset evaluations.
arXiv Detail & Related papers (2023-10-25T11:54:21Z)
- Theoretical framework for real time sub-micron depth monitoring using quantum inline coherent imaging [55.2480439325792]
Inline Coherent Imaging (ICI) is a reliable method for real-time monitoring of various laser processes, including keyhole welding, additive manufacturing, and micromachining.
The axial resolution is limited to greater than 2 μm, making ICI unsuitable for monitoring submicron processes.
Advancements in Quantum Optical Coherence Tomography (Q-OCT) have the potential to address this issue by achieving better than 1 μm depth resolution.
arXiv Detail & Related papers (2023-09-17T17:05:21Z)
- DADFNet: Dual Attention and Dual Frequency-Guided Dehazing Network for Video-Empowered Intelligent Transportation [79.18450119567315]
Adverse weather conditions pose severe challenges for video-based transportation surveillance.
We propose a dual attention and dual frequency-guided dehazing network (termed DADFNet) for real-time visibility enhancement.
arXiv Detail & Related papers (2023-04-19T11:55:30Z)
- Masked Frequency Modeling for Self-Supervised Visual Pre-Training [102.89756957704138]
We present Masked Frequency Modeling (MFM), a unified frequency-domain-based approach for self-supervised pre-training of visual models.
MFM first masks out a portion of frequency components of the input image and then predicts the missing frequencies on the frequency spectrum.
For the first time, MFM demonstrates that, for both ViT and CNN, a simple non-Siamese framework can learn meaningful representations even using none of the following: (i) extra data, (ii) extra model, (iii) mask token. (A brief sketch of the frequency-masking step appears after this list.)
arXiv Detail & Related papers (2022-06-15T17:58:30Z)
- FCL-GAN: A Lightweight and Real-Time Baseline for Unsupervised Blind Image Deblurring [72.43250555622254]
We propose a lightweight and real-time unsupervised BID baseline, termed Frequency-domain Contrastive Loss Constrained Lightweight CycleGAN.
FCL-GAN has attractive properties, i.e., no image domain limitation, no image resolution limitation, 25x lighter than SOTA, and 5x faster than SOTA.
Experiments on several image datasets demonstrate the effectiveness of FCL-GAN in terms of performance, model size, and inference time.
arXiv Detail & Related papers (2022-04-16T15:08:03Z)
- Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction [127.20208645280438]
Hyperspectral image (HSI) reconstruction aims to recover the 3D spatial-spectral signal from a 2D measurement.
Modeling the inter-spectra interactions is beneficial for HSI reconstruction.
We propose Mask-guided Spectral-wise Transformer (MST), a novel framework for HSI reconstruction.
arXiv Detail & Related papers (2021-11-15T16:59:48Z)
- TFill: Image Completion via a Transformer-Based Architecture [69.62228639870114]
We propose treating image completion as a directionless sequence-to-sequence prediction task.
We employ a restrictive CNN with small and non-overlapping receptive fields (RF) for token representation.
In a second phase, to improve appearance consistency between visible and generated regions, a novel attention-aware layer (AAL) is introduced.
arXiv Detail & Related papers (2021-04-02T01:42:01Z)
- Efficient Two-Stream Network for Violence Detection Using Separable Convolutional LSTM [0.0]
We propose an efficient two-stream deep learning architecture leveraging Separable Convolutional LSTM (SepConvLSTM) and pre-trained MobileNet.
SepConvLSTM is constructed by replacing convolution operation at each gate of ConvLSTM with a depthwise separable convolution.
Our model surpasses the previous best accuracy on the larger and more challenging RWF-2000 dataset by a margin of more than 2%.
arXiv Detail & Related papers (2021-02-21T12:01:48Z)
- Deep Frequent Spatial Temporal Learning for Face Anti-Spoofing [9.435020319411311]
Face anti-spoofing is crucial for the security of face recognition systems, as it prevents intrusion via presentation attacks.
Previous works have shown the effectiveness of using depth and temporal supervision for this task.
We propose a novel two-stream FreqSpatialTemporalNet for face anti-spoofing, which simultaneously takes advantage of frequency, spatial, and temporal information.
arXiv Detail & Related papers (2020-01-20T06:02:45Z)
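The Masked Frequency Modeling entry in the list above describes corrupting an image by removing part of its frequency spectrum and then predicting the missing frequencies. The snippet below is a minimal, hypothetical illustration of that masking step under assumed details (a circular low-/high-pass mask with a normalized radius threshold); it is not the MFM authors' exact recipe.

```python
# Hypothetical illustration of masking image frequencies (details assumed).
import torch


def mask_frequencies(img: torch.Tensor, radius: float = 0.25, keep_low: bool = True):
    """Zero out part of the spectrum of `img` (B, C, H, W) and return the
    corrupted image together with the full spectrum used as the target."""
    _, _, h, w = img.shape
    spec = torch.fft.fftshift(torch.fft.fft2(img, norm="ortho"), dim=(-2, -1))
    yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, h, device=img.device),
        torch.linspace(-1, 1, w, device=img.device),
        indexing="ij",
    )
    dist = torch.sqrt(xx ** 2 + yy ** 2)                  # distance from spectrum center
    keep = (dist <= radius) if keep_low else (dist > radius)
    masked = spec * keep.float()                          # drop high (or low) frequencies
    corrupted = torch.fft.ifft2(
        torch.fft.ifftshift(masked, dim=(-2, -1)), norm="ortho"
    ).real
    return corrupted, spec  # a model is trained to predict `spec` from `corrupted`
```

A predictor network would take `corrupted` as input and be supervised on the missing part of `spec`, which matches the high-level procedure summarized in the MFM entry.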