RoSe: Robust Self-supervised Stereo Matching under Adverse Weather Conditions
- URL: http://arxiv.org/abs/2509.19165v1
- Date: Tue, 23 Sep 2025 15:41:40 GMT
- Title: RoSe: Robust Self-supervised Stereo Matching under Adverse Weather Conditions
- Authors: Yun Wang, Junjie Hu, Junhui Hou, Chenghao Zhang, Renwei Yang, Dapeng Oliver Wu
- Abstract summary: We propose a robust self-supervised training paradigm, consisting of two key steps: robust self-supervised scene correspondence learning and adverse weather distillation. Experiments demonstrate the effectiveness and versatility of our proposed solution, which outperforms existing state-of-the-art self-supervised methods.
- Score: 58.37558408672509
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent self-supervised stereo matching methods have made significant progress, but their performance degrades severely under adverse weather conditions such as night, rain, and fog. We identify two primary weaknesses contributing to this performance degradation. First, adverse weather introduces noise and reduces visibility, making CNN-based feature extractors struggle with degraded regions like reflective and textureless areas. Second, these degraded regions can disrupt accurate pixel correspondences, leading to ineffective supervision based on the photometric consistency assumption. To address these challenges, we propose injecting robust priors derived from the visual foundation model into the CNN-based feature extractor to improve feature representation under adverse weather conditions. We then introduce scene correspondence priors to construct robust supervisory signals rather than relying solely on the photometric consistency assumption. Specifically, we create synthetic stereo datasets with realistic weather degradations. These datasets feature clear and adverse image pairs that maintain the same semantic context and disparity, preserving the scene correspondence property. With this knowledge, we propose a robust self-supervised training paradigm, consisting of two key steps: robust self-supervised scene correspondence learning and adverse weather distillation. Both steps aim to align underlying scene results from clean and adverse image pairs, thus improving model disparity estimation under adverse weather effects. Extensive experiments demonstrate the effectiveness and versatility of our proposed solution, which outperforms existing state-of-the-art self-supervised methods. Codes are available at https://github.com/cocowy1/RoSe-Robust-Self-supervised-Stereo-Matching-under-Adverse-Weather-Conditions.
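The core idea of the abstract — clean and degraded image pairs share the same scene geometry, so disparities predicted from both should agree — can be illustrated with a minimal numpy sketch. The function name, the weighting scheme, and the scalar photometric term below are illustrative assumptions, not the paper's actual loss:

```python
import numpy as np

def scene_alignment_loss(disp_clean, disp_adverse, photo_loss_clean, alpha=0.5):
    """Toy alignment objective: penalize disagreement between the disparity
    map predicted from a clean stereo pair and the one predicted from its
    weather-degraded counterpart, which share the same geometry by
    construction; combine it with a photometric term on the clean pair."""
    consistency = np.mean(np.abs(disp_clean - disp_adverse))
    return alpha * consistency + (1.0 - alpha) * photo_loss_clean
```

With identical predictions the consistency term vanishes and only the photometric term remains, which mirrors the intended behavior on clean data.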
Related papers
- WeatherDiffusion: Weather-Guided Diffusion Model for Forward and Inverse Rendering [40.94600501568197]
WeatherDiffusion is a diffusion-based framework for forward and inverse rendering on autonomous driving scenes. Our method enables authentic estimation of material properties, scene geometry, and lighting, and further supports controllable weather and illumination editing.
arXiv Detail & Related papers (2025-08-09T13:29:39Z)
- RobuSTereo: Robust Zero-Shot Stereo Matching under Adverse Weather [9.627322054208868]
Learning-based stereo matching models struggle in adverse weather conditions due to the scarcity of corresponding training data. We propose RobuSTereo, a novel framework that enhances the zero-shot generalization of stereo matching models under adverse weather.
arXiv Detail & Related papers (2025-07-02T12:27:53Z)
- Pseudo-Label Guided Real-World Image De-weathering: A Learning Framework with Imperfect Supervision [57.5699142476311]
We propose a unified solution for real-world image de-weathering with non-ideal supervision. Our method exhibits significant advantages when trained on imperfectly aligned de-weathering datasets.
arXiv Detail & Related papers (2025-04-14T07:24:03Z)
- WeatherProof: A Paired-Dataset Approach to Semantic Segmentation in Adverse Weather [9.619700283574533]
We introduce a general paired-training method that leads to improved performance on images in adverse weather conditions.
We create the first semantic segmentation dataset with accurate clear and adverse weather image pairs.
We find that training on these paired clear and adverse weather frames which share an underlying scene results in improved performance on adverse weather data.
arXiv Detail & Related papers (2023-12-15T04:57:54Z)
- Learning Real-World Image De-Weathering with Imperfect Supervision [57.748585821252824]
Existing real-world de-weathering datasets often exhibit inconsistent illumination, position, and textures between the ground-truth images and the input degraded images.
We develop a Consistent Label Constructor (CLC) to generate a pseudo-label as consistent as possible with the input degraded image.
We combine the original imperfect labels and pseudo-labels to jointly supervise the de-weathering model by the proposed Information Allocation Strategy.
arXiv Detail & Related papers (2023-10-23T14:02:57Z)
- DA-RAW: Domain Adaptive Object Detection for Real-World Adverse Weather Conditions [2.048226951354646]
We present an unsupervised domain adaptation framework for object detection in adverse weather conditions.
Our method resolves the style gap by concentrating on style-related information of high-level features.
Using self-supervised contrastive learning, our framework then reduces the weather gap and acquires instance features that are robust to weather corruption.
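The contrastive step described above can be sketched with a generic InfoNCE-style loss: pull an instance feature toward its weather-corrupted view and push it away from other instances. This is a standard formulation, assumed for illustration; the paper's exact loss, temperature, and feature construction may differ:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Minimal InfoNCE-style contrastive loss over 1-D feature vectors:
    the positive is a weather-augmented view of the anchor instance,
    negatives are features of other instances."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # similarity of anchor to positive (index 0) and to each negative
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with the positive as the target
```

Minimizing this loss makes features invariant to the weather corruption while keeping different instances separated.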
arXiv Detail & Related papers (2023-09-15T04:37:28Z)
- Robust Monocular Depth Estimation under Challenging Conditions [81.57697198031975]
State-of-the-art monocular depth estimation approaches are highly unreliable under challenging illumination and weather conditions.
We tackle these safety-critical issues with md4all: a simple and effective solution that works reliably under both adverse and ideal conditions.
arXiv Detail & Related papers (2023-08-18T17:59:01Z)
- Rethinking Real-world Image Deraining via An Unpaired Degradation-Conditioned Diffusion Model [51.49854435403139]
We propose RainDiff, the first real-world image deraining paradigm based on diffusion models.
We introduce a stable and non-adversarial unpaired cycle-consistent architecture that can be trained, end-to-end, with only unpaired data for supervision.
We also propose a degradation-conditioned diffusion model that refines the desired output via a diffusive generative process conditioned by learned priors of multiple rain degradations.
arXiv Detail & Related papers (2023-01-23T13:34:01Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Robustness of Object Detectors in Degrading Weather Conditions [7.91378990016322]
State-of-the-art object detection systems for autonomous driving achieve promising results in clear weather conditions.
These systems need to work in degrading weather conditions, such as rain, fog and snow.
Most approaches evaluate only on the KITTI dataset, which consists only of clear weather scenes.
arXiv Detail & Related papers (2021-06-16T13:56:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.