A feature-supervised generative adversarial network for environmental
monitoring during hazy days
- URL: http://arxiv.org/abs/2008.01942v1
- Date: Wed, 5 Aug 2020 05:27:15 GMT
- Title: A feature-supervised generative adversarial network for environmental
monitoring during hazy days
- Authors: Ke Wang, Siyuan Zhang, Junlan Chen, Fan Ren, Lei Xiao
- Abstract summary: This paper proposes a feature-supervised learning network based on generative adversarial networks (GAN) for environmental monitoring during hazy days.
The proposed method has achieved better performance than current state-of-the-art methods on both synthetic datasets and real-world remote sensing images.
- Score: 6.276954295407201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adverse haze weather condition has brought considerable difficulties to
vision-based environmental applications. However, most existing environmental
monitoring studies assume ordinary conditions, while complex haze weather
conditions have been largely ignored. This paper therefore
proposes a feature-supervised learning network based on generative adversarial
networks (GAN) for environmental monitoring during hazy days. Its main idea is
to train the model under the supervision of feature maps from the ground truth.
Four key technical contributions are made in the paper. First, pairs of hazy
and clean images are used as inputs to supervise the encoding process and
obtain high-quality feature maps. Second, the basic GAN formulation is modified
by introducing perception loss, style loss, and feature regularization loss to
generate better results. Third, multi-scale images are used as the
discriminator's input to enhance its performance. Finally, a hazy remote sensing
dataset is created for testing our dehazing method and environmental detection.
Extensive experimental results show that the proposed method has achieved
better performance than current state-of-the-art methods on both synthetic
datasets and real-world remote sensing images.
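The modified GAN objective combines a perception loss on feature maps, a style loss, and a feature regularization loss. The sketch below illustrates how such a combined loss might be assembled; the loss weights, the Gram-matrix style term, and the squared-activation regularizer are illustrative assumptions standing in for the paper's actual formulation, with NumPy arrays standing in for encoder feature maps.

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H, W) feature map -> (C, C) Gram matrix, as used in style losses
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def combined_loss(gen_feats, gt_feats, w_perc=1.0, w_style=0.5, w_reg=0.1):
    # Perception loss: L1 distance between generated and ground-truth feature maps
    perc = np.mean(np.abs(gen_feats - gt_feats))
    # Style loss: distance between the Gram matrices of the two feature maps
    style = np.mean(np.abs(gram_matrix(gen_feats) - gram_matrix(gt_feats)))
    # Feature regularization: penalize large activations in the generated features
    reg = np.mean(gen_feats ** 2)
    return w_perc * perc + w_style * style + w_reg * reg
```

In training, a term like this would be added to the adversarial loss, so the generator is supervised both by the discriminator and by the ground-truth feature maps.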
Related papers
- D-YOLO a robust framework for object detection in adverse weather conditions [0.0]
Adverse weather conditions including haze, snow and rain degrade image quality, which often reduces the performance of deep-learning-based detection networks.
To better integrate image restoration and object detection tasks, we designed a double-route network with an attention feature fusion module.
We also proposed a subnetwork to provide haze-free features to the detection network. Specifically, our D-YOLO improves the performance of the detection network by minimizing the distance between the clear feature extraction subnetwork and detection network.
arXiv Detail & Related papers (2024-03-14T09:57:15Z) - Exploring the Application of Large-scale Pre-trained Models on Adverse
Weather Removal [97.53040662243768]
We propose a CLIP embedding module to make the network handle different weather conditions adaptively.
This module integrates the sample-specific weather prior extracted by the CLIP image encoder with the distribution-specific information learned by a set of parameters.
arXiv Detail & Related papers (2023-06-15T10:06:13Z) - Object recognition in atmospheric turbulence scenes [2.657505380055164]
We propose a novel framework that learns distorted features to detect and classify object types in turbulent environments.
Specifically, we utilise deformable convolutions to handle spatial displacement.
We show that the proposed framework outperforms the benchmark with a mean Average Precision (mAP) score exceeding 30%.
arXiv Detail & Related papers (2022-10-25T20:21:25Z) - A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and
Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z) - Unsupervised Restoration of Weather-affected Images using Deep Gaussian
Process-based CycleGAN [92.15895515035795]
We describe an approach for supervising deep networks that are based on CycleGAN.
We introduce new losses for training CycleGAN that lead to more effective training, resulting in high-quality reconstructions.
We demonstrate that the proposed method can be effectively applied to different restoration tasks like de-raining, de-hazing and de-snowing.
arXiv Detail & Related papers (2022-04-23T01:30:47Z) - Vision in adverse weather: Augmentation using CycleGANs with various
object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach of using synthesised adverse condition datasets in autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors.
arXiv Detail & Related papers (2022-01-10T10:02:40Z) - Underwater Object Classification and Detection: first results and open
challenges [1.1549572298362782]
This work reviews the problem of object detection in underwater environments.
We analyse and quantify the shortcomings of conventional state-of-the-art (SOTA) algorithms.
arXiv Detail & Related papers (2022-01-04T04:54:08Z) - Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing
Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework at KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z) - FD-GAN: Generative Adversarial Networks with Fusion-discriminator for
Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Networks with Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.