Depth-SIMS: Semi-Parametric Image and Depth Synthesis
- URL: http://arxiv.org/abs/2203.03405v1
- Date: Mon, 7 Mar 2022 13:58:32 GMT
- Title: Depth-SIMS: Semi-Parametric Image and Depth Synthesis
- Authors: Valentina Musat, Daniele De Martini, Matthew Gadd and Paul Newman
- Abstract summary: We present a method that generates RGB canvases with well aligned segmentation maps and sparse depth maps, coupled with an in-painting network that transforms the RGB canvases into high quality RGB images.
We benchmark our method in terms of structural alignment and image quality, showing an increase in mIoU over SOTA by 3.7 percentage points and a highly competitive FID.
We analyse the quality of the generated data as training data for semantic segmentation and depth completion, and show that our approach is more suited for this purpose than other methods.
- Score: 23.700034054124604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present a compositing image synthesis method that generates
RGB canvases with well aligned segmentation maps and sparse depth maps, coupled
with an in-painting network that transforms the RGB canvases into high quality
RGB images and the sparse depth maps into pixel-wise dense depth maps. We
benchmark our method in terms of structural alignment and image quality,
showing an increase in mIoU over SOTA by 3.7 percentage points and a highly
competitive FID. Furthermore, we analyse the quality of the generated data as
training data for semantic segmentation and depth completion, and show that our
approach is more suited for this purpose than other methods.
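As a rough illustration of the two-stage design described in the abstract, here is a minimal PyTorch sketch: a composited RGB canvas, a sparse depth canvas, and a validity mask are fed to a small encoder-decoder that returns an in-painted RGB image and a dense depth map. The architecture, shapes, and names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-stage idea (illustrative; not the authors' code).
# Stage 1 composites an RGB canvas plus a sparse depth canvas from retrieved
# patches; stage 2 is a toy encoder-decoder that in-paints both.
import torch
import torch.nn as nn

class InpaintingNet(nn.Module):
    """Takes RGB canvas (3ch) + sparse depth (1ch) + validity mask (1ch);
    returns a refined RGB image and a pixel-wise dense depth map."""
    def __init__(self, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(5, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, 2 * width, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * width, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, 4, 4, stride=2, padding=1),  # 3 RGB + 1 depth
        )

    def forward(self, rgb_canvas, sparse_depth, mask):
        x = torch.cat([rgb_canvas, sparse_depth, mask], dim=1)
        out = self.decoder(self.encoder(x))
        rgb, depth = out[:, :3], out[:, 3:]
        return torch.sigmoid(rgb), torch.relu(depth)

net = InpaintingNet()
rgb_canvas = torch.rand(1, 3, 128, 256)        # stitched patches, with holes
sparse_depth = torch.rand(1, 1, 128, 256)      # depth only where patches carry it
mask = (sparse_depth > 0.5).float()            # toy validity mask
rgb, dense_depth = net(rgb_canvas, sparse_depth, mask)
print(rgb.shape, dense_depth.shape)            # (1,3,128,256) (1,1,128,256)
```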
Related papers
- Refinement of Monocular Depth Maps via Multi-View Differentiable Rendering [4.717325308876748]
We present a novel approach to generate view consistent and detailed depth maps from a number of posed images.
We leverage advances in monocular depth estimation, which generate topologically complete but metrically inaccurate depth maps.
Our method generates dense, detailed, high-quality depth maps, even in challenging indoor scenarios, and outperforms state-of-the-art depth reconstruction approaches.
arXiv Detail & Related papers (2024-10-04T18:50:28Z)
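The "metrically inaccurate" monocular predictions this entry mentions are commonly aligned to sparse metric measurements with a per-image scale and shift before refinement; a minimal NumPy sketch of that standard least-squares step (a generic technique, not necessarily this paper's exact procedure):

```python
# Least-squares scale/shift alignment of a monocular depth map to sparse
# metric measurements -- a standard fix for metrically inaccurate predictions.
import numpy as np

def align_scale_shift(pred, sparse_gt, mask):
    """Solve min_{s,t} || s * pred + t - gt ||^2 over valid pixels."""
    p = pred[mask]
    g = sparse_gt[mask]
    A = np.stack([p, np.ones_like(p)], axis=1)   # design matrix [pred, 1]
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s * pred + t

pred = np.random.rand(240, 320)                  # relative (unitless) depth
gt = 2.0 * pred + 0.5                            # fake metric depth for the demo
mask = np.random.rand(240, 320) < 0.01           # ~1% sparse measurements
aligned = align_scale_shift(pred, gt, mask)
print(np.abs(aligned - gt).mean())               # ~0 after alignment
```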
- Depth-guided Texture Diffusion for Image Semantic Segmentation [47.46257473475867]
We introduce a Depth-guided Texture Diffusion approach that effectively tackles the challenge of exploiting depth information for image semantic segmentation.
Our method extracts low-level features from edges and textures to create a texture image.
This texture image is then selectively diffused across the depth map to enhance its structure. By integrating this enriched depth map with the original RGB image into a joint feature embedding, our method effectively bridges the disparity between the depth map and the image.
arXiv Detail & Related papers (2024-08-17T04:55:03Z)
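As a loose illustration of the "low-level features from edges and textures" step, here is a NumPy/SciPy sketch that builds a texture image from Sobel edge magnitudes; the operator choice is an assumption, since the summary does not specify it:

```python
# Build a simple "texture image" from low-level edge responses, in the spirit
# of the summary; Sobel gradients are an assumption, not the paper's operator.
import numpy as np
from scipy import ndimage

def texture_image(gray):
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    mag = np.hypot(gx, gy)                       # edge magnitude
    return mag / (mag.max() + 1e-8)              # normalise to [0, 1]

gray = np.random.rand(240, 320).astype(np.float32)
tex = texture_image(gray)
# tex could then be embedded jointly with the RGB image and depth map,
# e.g. by channel-wise concatenation before a shared encoder.
print(tex.shape, tex.min(), tex.max())
```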
- Depth-Guided Semi-Supervised Instance Segmentation [62.80063539262021]
Semi-Supervised Instance Segmentation (SSIS) aims to leverage large amounts of unlabeled data during training.
Previous frameworks primarily utilized the RGB information of unlabeled images to generate pseudo-labels.
We introduce a Depth-Guided (DG) framework to overcome this limitation.
arXiv Detail & Related papers (2024-06-25T09:36:50Z)
- Symmetric Uncertainty-Aware Feature Transmission for Depth Super-Resolution [52.582632746409665]
We propose a novel Symmetric Uncertainty-aware Feature Transmission (SUFT) for color-guided DSR.
Our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-06-01T06:35:59Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when only a few propagation steps are used.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
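Spatial-propagation methods of this family refine an initial depth map by repeatedly replacing each pixel with an affinity-weighted mix of its neighbours; a generic PyTorch sketch of one propagation step, with random affinities standing in for the learned, dynamically constructed graph of the actual method:

```python
# One generic spatial-propagation step: each pixel's depth becomes an
# affinity-weighted average of its 3x3 neighbourhood. Random affinities stand
# in for the learned graph of the actual method; purely illustrative.
import torch
import torch.nn.functional as F

def propagate(depth, affinity, steps=3):
    """depth: (B,1,H,W); affinity: (B,9,H,W), softmax-normalised over dim 1."""
    b, _, h, w = depth.shape
    for _ in range(steps):
        patches = F.unfold(depth, kernel_size=3, padding=1)   # (B, 9, H*W)
        patches = patches.view(b, 9, h, w)
        depth = (affinity * patches).sum(dim=1, keepdim=True)
    return depth

depth = torch.rand(1, 1, 64, 64)                 # initial (coarse) prediction
affinity = torch.softmax(torch.rand(1, 9, 64, 64), dim=1)
refined = propagate(depth, affinity)
print(refined.shape)                             # (1, 1, 64, 64)
```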
- Beyond Visual Field of View: Perceiving 3D Environment with Echoes and Vision [51.385731364529306]
This paper focuses on perceiving and navigating 3D environments using echoes and RGB images.
In particular, we perform depth estimation by fusing the RGB image with echoes received from multiple orientations.
We show that the echoes provide holistic and inexpensive information about the 3D structure, complementing the RGB image.
arXiv Detail & Related papers (2022-07-03T22:31:47Z)
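A toy PyTorch sketch of the audio-visual fusion idea: echo spectrograms and the RGB image are encoded by separate branches, concatenated, and decoded into depth. Branch design and shapes are illustrative assumptions, not the paper's architecture.

```python
# Toy audio-visual fusion for depth estimation: one branch encodes echo
# spectrograms (several receiver orientations stacked as channels), the other
# encodes the RGB image; fused features are decoded into depth.
import torch
import torch.nn as nn

class EchoVisionDepth(nn.Module):
    def __init__(self, n_echo_channels=4, width=16):
        super().__init__()
        self.rgb_enc = nn.Sequential(
            nn.Conv2d(3, width, 3, stride=2, padding=1), nn.ReLU())
        self.echo_enc = nn.Sequential(
            nn.Conv2d(n_echo_channels, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))             # global echo descriptor
        self.dec = nn.ConvTranspose2d(2 * width, 1, 4, stride=2, padding=1)

    def forward(self, rgb, echo_spec):
        v = self.rgb_enc(rgb)                    # (B, width, H/2, W/2)
        a = self.echo_enc(echo_spec).expand_as(v)  # broadcast echo cue spatially
        return torch.relu(self.dec(torch.cat([v, a], dim=1)))

net = EchoVisionDepth()
rgb = torch.rand(1, 3, 128, 128)
echo = torch.rand(1, 4, 64, 64)       # spectrograms from 4 receiver orientations
print(net(rgb, echo).shape)           # (1, 1, 128, 128)
```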
- PDC: Piecewise Depth Completion utilizing Superpixels [0.0]
Current approaches often rely on CNN-based methods with several known drawbacks.
We propose our novel Piecewise Depth Completion (PDC), which works completely without deep learning.
Our evaluation shows both the influence of the individual processing steps and the overall performance of our method on the challenging KITTI dataset.
arXiv Detail & Related papers (2021-07-14T13:58:39Z)
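A minimal deep-learning-free sketch in the spirit of piecewise completion: segment the guidance image into superpixels (SLIC here, as an assumption) and fill each segment from the median of its own sparse depth samples:

```python
# Deep-learning-free, piecewise depth completion: segment the RGB guide into
# superpixels and fill each segment from its own sparse LiDAR samples.
# SLIC and the per-segment median are assumptions, not the paper's exact steps.
import numpy as np
from skimage.segmentation import slic

def piecewise_complete(rgb, sparse_depth, n_segments=200):
    labels = slic(rgb, n_segments=n_segments, start_label=0)
    dense = np.zeros_like(sparse_depth)
    for seg in np.unique(labels):
        region = labels == seg
        samples = sparse_depth[region]
        samples = samples[samples > 0]           # valid LiDAR hits only
        if samples.size:
            dense[region] = np.median(samples)   # piecewise-constant fill
    return dense

rgb = np.random.rand(120, 160, 3)
sparse = np.where(np.random.rand(120, 160) < 0.05, np.random.rand(120, 160), 0.0)
print(piecewise_complete(rgb, sparse).shape)     # (120, 160)
```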
- Deterministic Guided LiDAR Depth Map Completion [0.0]
This paper presents a non-deep learning-based approach to densify a sparse LiDAR-based depth map using a guidance RGB image.
The approach is evaluated on the KITTI depth completion benchmark, validating the proposed method.
arXiv Detail & Related papers (2021-06-14T09:19:47Z)
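One classic deterministic scheme for this task is a joint-bilateral average that weights nearby LiDAR samples by spatial and colour proximity to the guidance image; a NumPy sketch of that generic idea, not necessarily this paper's exact algorithm:

```python
# Deterministic, RGB-guided densification via a joint-bilateral average:
# nearby sparse samples are weighted by spatial distance and colour similarity
# to the centre pixel. Illustrative of the general technique only.
import numpy as np

def guided_densify(sparse, rgb, radius=5, sigma_s=3.0, sigma_c=0.1):
    h, w = sparse.shape
    dense = np.zeros_like(sparse)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = sparse[y0:y1, x0:x1]
            valid = patch > 0                    # pixels with LiDAR returns
            if not valid.any():
                continue
            colour = np.exp(-np.sum((rgb[y0:y1, x0:x1] - rgb[y, x])**2, axis=-1)
                            / (2 * sigma_c**2))
            wgt = (spatial[y0 - y + radius:y1 - y + radius,
                           x0 - x + radius:x1 - x + radius] * colour)[valid]
            dense[y, x] = np.sum(wgt * patch[valid]) / (wgt.sum() + 1e-8)
    return dense

rgb = np.random.rand(60, 80, 3)
sparse = np.where(np.random.rand(60, 80) < 0.05, np.random.rand(60, 80), 0.0)
print(guided_densify(sparse, rgb).shape)         # (60, 80)
```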
- A Method of Generating Measurable Panoramic Image for Indoor Mobile Measurement System [36.47697710426005]
This paper presents a pipeline for generating high-quality panoramic images with depth information.
For the fusion of 3D points and image data, we adopt a parameter self-adaptive framework to produce a dense 2D depth map.
For image stitching, an optimal seamline over the overlapping area is found using a graph-cuts-based method.
arXiv Detail & Related papers (2020-10-27T13:12:02Z)
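Producing the panoramic depth map presupposes projecting the 3D points into panorama coordinates; a NumPy sketch of the standard equirectangular projection, assumed here since the summary does not give the paper's camera model:

```python
# Project 3D points (sensor frame) into equirectangular panorama coordinates
# to obtain a sparse panoramic depth map, before any densification step.
import numpy as np

def project_to_panorama(points, height=512, width=1024):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)               # range = depth value
    lon = np.arctan2(x, z)                           # [-pi, pi]
    lat = np.arcsin(np.clip(y / np.maximum(r, 1e-8), -1, 1))  # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (width - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (height - 1)).astype(int)
    depth = np.zeros((height, width))
    order = np.argsort(-r)           # far points first, so near ones overwrite
    depth[v[order], u[order]] = r[order]
    return depth

pts = np.random.randn(5000, 3) * 5                   # toy point cloud
sparse_pano = project_to_panorama(pts)
print((sparse_pano > 0).sum(), "pixels carry depth")
```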
- Depth Completion Using a View-constrained Deep Prior [73.21559000917554]
Recent work has shown that the structure of convolutional neural networks (CNNs) induces a strong prior that favors natural images.
This prior, known as a deep image prior (DIP), is an effective regularizer in inverse problems such as image denoising and inpainting.
We extend the concept of the DIP to depth images: given color images and noisy, incomplete target depth maps, we reconstruct a restored depth map using the CNN structure as a prior.
arXiv Detail & Related papers (2020-01-21T21:56:01Z)
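A minimal PyTorch sketch of the plain DIP idea applied to depth: fit a small CNN, from a fixed random input, to the noisy and incomplete depth only at observed pixels, and let the network structure regularise the rest. This omits the paper's view constraints:

```python
# Deep-image-prior style depth restoration: optimise a small CNN (fed a fixed
# random code) to match the noisy, incomplete depth only where observations
# exist; the architecture itself regularises the missing regions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 8, 64, 64)                    # fixed random code
target = torch.rand(1, 1, 64, 64)                # noisy, incomplete target depth
mask = (torch.rand(1, 1, 64, 64) < 0.2).float()  # observed pixels only

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):                          # early stopping is essential
    opt.zero_grad()
    pred = net(z)
    loss = ((pred - target) ** 2 * mask).sum() / mask.sum()
    loss.backward()
    opt.step()
restored = net(z).detach()                       # depth filled in by the prior
print(restored.shape)                            # (1, 1, 64, 64)
```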
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.