Towards urban scenes understanding through polarization cues
- URL: http://arxiv.org/abs/2106.01717v1
- Date: Thu, 3 Jun 2021 09:40:08 GMT
- Title: Towards urban scenes understanding through polarization cues
- Authors: Marc Blanchon, Désiré Sidibé, Olivier Morel, Ralph Seulin, Fabrice Meriaudeau
- Abstract summary: We propose a two-axis pipeline based on polarization indices to analyze dynamic urban scenes.
In addition to the conventional photometric characteristics, we propose to include polarization sensing.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous robotics is critically affected by the robustness of its scene
understanding algorithms. We propose a two-axis pipeline based on polarization
indices to analyze dynamic urban scenes. As robots evolve in unknown
environments, they are prone to encountering specular obstacles. Specular
phenomena are rarely taken into account by scene understanding algorithms,
which causes misinterpretations and erroneous estimates. By exploiting all
the properties of light, systems can greatly increase their robustness to
such events. In
addition to the conventional photometric characteristics, we propose to include
polarization sensing.
We demonstrate in this paper that polarization measurements improve both
segmentation performance and the quality of depth estimation. Our
polarimetry-based approaches are compared with other state-of-the-art
RGB-centric methods, showing the benefit of polarization imaging.
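The polarization indices the abstract refers to are not spelled out, but they are conventionally the degree and angle of linear polarization derived from the linear Stokes parameters. As a minimal illustrative sketch (not the authors' code), assuming a division-of-focal-plane polarimetric camera that delivers intensity images at 0°, 45°, 90°, and 135°:

```python
import numpy as np

def polarization_indices(i0, i45, i90, i135):
    """Compute the linear Stokes parameters and the standard polarization
    indices (DoLP, AoLP) from four polarizer-angle intensity images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                         # horizontal vs. vertical component
    s2 = i45 - i135                       # diagonal components
    eps = 1e-8                            # avoid division by zero in dark pixels
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)             # angle of linear polarization (rad)
    return s0, dolp, aolp
```

A fully horizontally polarized pixel (i0=1, i90=0, i45=i135=0.5) yields DoLP ≈ 1 and AoLP ≈ 0, while an unpolarized pixel yields DoLP ≈ 0; these indices are the kind of per-pixel cues fed to the segmentation and depth branches.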
Related papers
- Adaptive Stereo Depth Estimation with Multi-Spectral Images Across All Lighting Conditions [58.88917836512819]
We propose a novel framework incorporating stereo depth estimation to enforce accurate geometric constraints.
To mitigate the effects of poor lighting on stereo matching, we introduce Degradation Masking.
Our method achieves state-of-the-art (SOTA) performance on the Multi-Spectral Stereo (MS2) dataset.
arXiv Detail & Related papers (2024-11-06T03:30:46Z)
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
- NeISF: Neural Incident Stokes Field for Geometry and Material Estimation [50.588983686271284]
Multi-view inverse rendering is the problem of estimating the scene parameters such as shapes, materials, or illuminations from a sequence of images captured under different viewpoints.
We propose Neural Incident Stokes Fields (NeISF), a multi-view inverse framework that reduces ambiguities using polarization cues.
arXiv Detail & Related papers (2023-11-22T06:28:30Z) - PARTNER: Level up the Polar Representation for LiDAR 3D Object Detection [81.16859686137435]
We present PARTNER, a novel 3D object detector in the polar coordinate.
Our method outperforms the previous polar-based works with remarkable margins of 3.68% and 9.15% on the Waymo and ONCE validation sets.
arXiv Detail & Related papers (2023-08-08T01:59:20Z) - Polarimetric Imaging for Perception [3.093890460224435]
We analyze the potential for improvement in perception tasks when using an RGB-polarimetric camera.
We show that a quantifiable improvement can be achieved on these tasks using state-of-the-art deep neural networks.
arXiv Detail & Related papers (2023-05-24T06:42:27Z) - Physically-admissible polarimetric data augmentation for road-scene
analysis [4.972086627584208]
We propose a constrained CycleGAN to transfer large labeled road-scene datasets to the polarimetric domain.
The resulting constrained CycleGAN is publicly released, allowing anyone to generate their own polarimetric images.
arXiv Detail & Related papers (2022-06-15T10:04:43Z) - Degradation-agnostic Correspondence from Resolution-asymmetric Stereo [96.03964515969652]
We study the problem of stereo matching from a pair of images with different resolutions, e.g., those acquired with a tele-wide camera system.
We propose to impose the consistency between two views in a feature space instead of the image space, named feature-metric consistency.
We find that, although a stereo matching network trained with the photometric loss is not optimal, its feature extractor can produce degradation-agnostic and matching-specific features.
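The feature-metric consistency idea above can be sketched in a toy form: rather than comparing warped pixel intensities, the loss compares feature maps after warping one view by the estimated disparity. This hypothetical NumPy version uses nearest-neighbor warping as a stand-in for the bilinear sampling a real stereo network would use:

```python
import numpy as np

def warp_by_disparity(feat_right, disparity):
    """Warp a right-view feature map (C, H, W) into the left view using
    per-pixel horizontal disparities; out-of-range samples are zeroed."""
    c, h, w = feat_right.shape
    warped = np.zeros_like(feat_right)
    for y in range(h):
        for x in range(w):
            src = x - int(round(disparity[y, x]))  # left x maps to right x - d
            if 0 <= src < w:
                warped[:, y, x] = feat_right[:, y, src]
    return warped

def feature_metric_loss(feat_left, feat_right, disparity):
    """Mean L1 distance measured in feature space instead of image space."""
    return np.abs(feat_left - warp_by_disparity(feat_right, disparity)).mean()
```

With a correct disparity and features that match across views, the loss vanishes; in the resolution-asymmetric setting the point is that such features can be degradation-agnostic even when raw pixel intensities are not comparable.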
arXiv Detail & Related papers (2022-04-04T12:24:34Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian
Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes inference more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - Polarimetric Monocular Dense Mapping Using Relative Deep Depth Prior [8.552832023331248]
We propose an online reconstruction method that uses full polarimetric cues available from the polarization camera.
Our method significantly improves the accuracy of the depth map and increases its density, especially in regions of poor texture.
arXiv Detail & Related papers (2021-02-10T01:34:37Z) - P2D: a self-supervised method for depth estimation from polarimetry [0.7046417074932255]
We propose exploiting polarization cues to encourage accurate reconstruction of scenes.
Our method is evaluated both qualitatively and quantitatively, demonstrating that this new information, together with an enhanced loss function, improves depth estimation results.
arXiv Detail & Related papers (2020-07-15T09:32:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.