Estimating Fog Parameters from a Sequence of Stereo Images
- URL: http://arxiv.org/abs/2511.20865v1
- Date: Tue, 25 Nov 2025 21:25:41 GMT
- Title: Estimating Fog Parameters from a Sequence of Stereo Images
- Authors: Yining Ding, João F. C. Mota, Andrew M. Wallace, Sen Wang
- Abstract summary: We propose a method which, given a sequence of stereo foggy images, estimates the parameters of a fog model and updates them dynamically. In contrast with previous approaches, which estimate the parameters sequentially and thus are prone to error propagation, our algorithm estimates all the parameters simultaneously by solving a novel optimisation problem. The proposed algorithm can be easily used as an add-on module in existing visual Simultaneous Localisation and Mapping (SLAM) or odometry systems in the presence of fog.
- Score: 8.583016330401971
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method which, given a sequence of stereo foggy images, estimates the parameters of a fog model and updates them dynamically. In contrast with previous approaches, which estimate the parameters sequentially and thus are prone to error propagation, our algorithm estimates all the parameters simultaneously by solving a novel optimisation problem. By assuming that fog is only locally homogeneous, our method effectively handles real-world fog, which is often globally inhomogeneous. The proposed algorithm can be easily used as an add-on module in existing visual Simultaneous Localisation and Mapping (SLAM) or odometry systems in the presence of fog. In order to assess our method, we also created a new dataset, the Stereo Driving In Real Fog (SDIRF), consisting of high-quality, consecutive stereo frames of real, foggy road scenes under a variety of visibility conditions, totalling over 40 minutes and 34k frames. As a first-of-its-kind, SDIRF contains the camera's photometric parameters calibrated in a lab environment, which is a prerequisite for correctly applying the atmospheric scattering model to foggy images. The dataset also includes the counterpart clear data of the same routes recorded in overcast weather, which is useful for companion work in image defogging and depth reconstruction. We conducted extensive experiments using both synthetic foggy data and real foggy sequences from SDIRF to demonstrate the superiority of the proposed algorithm over prior methods. Our method not only produces the most accurate estimates on synthetic data, but also adapts better to real fog. We make our code and SDIRF publicly available (https://github.com/SenseRoboticsLab/estimating-fog-parameters) to the community with the aim of advancing the research on visual perception in fog.
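The atmospheric scattering model mentioned in the abstract is the standard Koschmieder-style fog model widely used in the defogging literature: I(x) = J(x)·t(x) + A·(1 − t(x)), with transmission t(x) = exp(−β·d(x)), where J is the clear scene radiance, A the airlight, β the extinction coefficient, and d the scene depth. The sketch below is not the paper's estimation algorithm; it is a minimal illustration of the forward model, with the parameter values chosen arbitrarily for demonstration.

```python
import numpy as np

def apply_fog(clear_rgb, depth, beta=0.05, airlight=0.8):
    """Synthesise fog on a clear image via the standard atmospheric
    scattering model:

        I(x) = J(x) * t(x) + A * (1 - t(x)),   t(x) = exp(-beta * d(x))

    clear_rgb : HxWx3 float array in [0, 1] (clear scene radiance J)
    depth     : HxW array of scene depths in metres (d)
    beta      : extinction coefficient (illustrative value, not from the paper)
    airlight  : airlight intensity A (illustrative value, not from the paper)
    """
    t = np.exp(-beta * depth)[..., None]           # per-pixel transmission map
    return clear_rgb * t + airlight * (1.0 - t)    # attenuated scene + scattered airlight
```

At zero depth the transmission is 1 and the pixel is unchanged; as depth grows the pixel converges to the airlight, which is why distant objects in fog fade to a uniform grey.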
Related papers
- SF3D-RGB: Scene Flow Estimation from Monocular Camera and Sparse LiDAR [17.224692757126153]
We present a deep learning architecture for sparse scene flow estimation using 2D monocular images and 3D point clouds. Our architecture is an end-to-end model that first encodes information from each modality into features and fuses them together. Experiments show that our proposed method outperforms single-modality methods and achieves better scene flow accuracy on real-world datasets.
arXiv Detail & Related papers (2026-02-25T09:03:42Z) - Depth Completion as Parameter-Efficient Test-Time Adaptation [66.72360181325877]
CAPA is a parameter-efficient test-time optimization framework that adapts pre-trained 3D foundation models (FMs) for depth completion. For videos, CAPA introduces sequence-level parameter sharing, jointly adapting all frames to exploit temporal correlations, improve robustness, and enforce multi-frame consistency.
arXiv Detail & Related papers (2026-02-16T13:53:23Z) - Bridging Clear and Adverse Driving Conditions [0.0]
Our Domain Adaptation pipeline transforms clear-weather images into fog, rain, snow, and nighttime images. We leverage an existing DA GAN, extend it to support auxiliary inputs, and develop a novel training recipe that leverages both simulated and real images.
arXiv Detail & Related papers (2025-08-19T07:58:05Z) - DehazeGS: Seeing Through Fog with 3D Gaussian Splatting [17.119969983512533]
We introduce DehazeGS, a method capable of decomposing and rendering a fog-free background from participating media. Experiments on both synthetic and real-world foggy datasets demonstrate that DehazeGS achieves state-of-the-art performance.
arXiv Detail & Related papers (2025-01-07T09:47:46Z) - SynFog: A Photo-realistic Synthetic Fog Dataset based on End-to-end Imaging Simulation for Advancing Real-World Defogging in Autonomous Driving [48.27575423606407]
We introduce an end-to-end simulation pipeline designed to generate photo-realistic foggy images.
We present a new synthetic fog dataset named SynFog, which features both sky light and active lighting conditions.
Experimental results demonstrate that models trained on SynFog exhibit superior performance in visual perception and detection accuracy.
arXiv Detail & Related papers (2024-03-25T18:32:41Z) - FogGuard: guarding YOLO against fog using perceptual loss [5.868532677577194]
FogGuard is a fog-aware object detection network designed to address the challenges posed by foggy weather conditions.
FogGuard compensates for foggy conditions in the scene by incorporating YOLOv3 as the baseline algorithm.
Our network significantly improves performance, achieving a 69.43% mAP compared to YOLOv3's 57.78% on the RTTS dataset.
arXiv Detail & Related papers (2024-03-13T20:13:25Z) - Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather [92.84066576636914]
This work addresses the challenging task of LiDAR-based 3D object detection in foggy weather.
We tackle this problem by simulating physically accurate fog into clear-weather scenes.
We are the first to provide strong 3D object detection baselines on the Seeing Through Fog dataset.
arXiv Detail & Related papers (2021-08-11T14:37:54Z) - 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop [128.07841893637337]
Regression-based methods have recently shown promising results in reconstructing human meshes from monocular images.
Minor deviation in parameters may lead to noticeable misalignment between the estimated meshes and image evidences.
We propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop to leverage a feature pyramid and rectify the predicted parameters.
arXiv Detail & Related papers (2021-03-30T17:07:49Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - Single Image Brightening via Multi-Scale Exposure Fusion with Hybrid Learning [48.890709236564945]
A small ISO and a short exposure time are usually used to capture an image in backlit or low-light conditions.
In this paper, a single image brightening algorithm is introduced to brighten such an image.
The proposed algorithm includes a unique hybrid learning framework to generate two virtual images with large exposure times.
arXiv Detail & Related papers (2020-07-04T08:23:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.