Colored Point Cloud to Image Alignment
- URL: http://arxiv.org/abs/2110.03249v1
- Date: Thu, 7 Oct 2021 08:12:56 GMT
- Title: Colored Point Cloud to Image Alignment
- Authors: Noam Rotstein, Amit Bracha, Ron Kimmel
- Abstract summary: We introduce a differential optimization method that aligns a colored point cloud to a given color image via iterative geometric and color matching.
We find the transformation between the camera image and the point cloud colors by iterating between matching the relative location of the point cloud and matching colors.
- Score: 15.828285556159026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recognition and segmentation of objects in images enjoy a wealth of
well-annotated data. At the other end, when dealing with the reconstruction of
the geometric structure of objects from images, only a limited amount of
accurate data is available for supervised learning. One type of geometric data
that is in insufficient supply for deep learning is accurate real-world RGB-D
images. The lack of accurate RGB-D datasets is one of the obstacles to the
evolution of geometric scene reconstruction from images. One
solution to creating such a dataset is to capture RGB images while
simultaneously using an accurate depth scanning device that assigns a depth
value to each pixel. A major challenge in acquiring such ground truth data is
the accurate alignment between the RGB images and the measured depth and color
profiles. We introduce a differential optimization method that aligns a colored
point cloud to a given color image via iterative geometric and color matching.
The proposed method enables the construction of RGB-D datasets for specific
camera systems. In the suggested framework, the optimization minimizes the
difference between the colors of the image pixels and the corresponding colors
of the points projected onto the camera plane. We assume that the colors produced
by the geometric scanner camera and the color camera sensor are different and
thus are characterized by different chromatic acquisition properties. We align
the different color spaces while compensating for their corresponding color
appearance. Under this setup, we find the transformation between the camera
image and the point cloud colors by iterating between matching the relative
location of the point cloud and matching colors. The successful alignments
produced by the proposed method are demonstrated on both synthetic data with
quantitative evaluation and real world scenes with qualitative results.
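The alternating scheme described in the abstract can be made concrete with a toy sketch. The following minimal NumPy illustration is an assumption-laden sketch, not the paper's implementation: it pairs a pinhole projection of the point cloud onto the camera plane with a least-squares affine color transform that compensates for the different chromatic acquisition properties of the two sensors. The function names and the affine color model are hypothetical.

```python
import numpy as np

def project(points, K):
    """Pinhole projection of Nx3 camera-frame points to pixel coordinates."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

def fit_color_transform(src, dst):
    """Least-squares affine map between two color spaces (color-matching step).

    Returns a 4x3 matrix M so that [r g b 1] @ M approximates the target color.
    """
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def photometric_loss(point_colors, image_colors, M):
    """Mean squared difference between transformed point colors and the
    image colors sampled at the projected pixel locations."""
    pred = np.hstack([point_colors, np.ones((len(point_colors), 1))]) @ M
    return np.mean((pred - image_colors) ** 2)
```

In the full method, the pose would be updated by differentiating a loss of this kind with respect to the camera transformation, alternating with the color-matching step until convergence.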
Related papers
- Clothes Grasping and Unfolding Based on RGB-D Semantic Segmentation [21.950751953721817]
We propose a novel Bi-directional Fractal Cross Fusion Network (BiFCNet) for semantic segmentation.
We use RGB images with rich color features as input to our network in which the Fractal Cross Fusion module fuses RGB and depth data.
To reduce the cost of real data collection, we propose a data augmentation method based on an adversarial strategy.
arXiv Detail & Related papers (2023-05-05T03:21:55Z) - Spherical Space Feature Decomposition for Guided Depth Map Super-Resolution [123.04455334124188]
Guided depth map super-resolution (GDSR) aims to upsample low-resolution (LR) depth maps with additional information involved in high-resolution (HR) RGB images from the same scene.
In this paper, we propose the Spherical Space feature Decomposition Network (SSDNet) to solve the above issues.
Our method can achieve state-of-the-art results on four test datasets, as well as successfully generalize to real-world scenes.
arXiv Detail & Related papers (2023-03-15T21:22:21Z) - 4D LUT: Learnable Context-Aware 4D Lookup Table for Image Enhancement [50.49396123016185]
We propose a novel learnable context-aware 4-dimensional lookup table (4D LUT).
It achieves content-dependent enhancement of different contents in each image via adaptive learning of the photo context.
Compared with traditional 3D LUT, i.e., RGB mapping to RGB, 4D LUT enables finer control of color transformations for pixels with different content in each image.
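The distinction between a 3D LUT and a 4D LUT can be illustrated with a toy nearest-neighbor lookup. This is a sketch only: the paper's 4D LUT is learnable and uses interpolation rather than rounding, and the function names and shapes below are assumptions.

```python
import numpy as np

def lut3d_apply(lut, rgb):
    """Traditional 3D LUT: an S x S x S x 3 table mapping RGB -> RGB.
    Every pixel with the same color gets the same output, regardless of content."""
    S = lut.shape[0]
    idx = np.clip((rgb * (S - 1)).round().astype(int), 0, S - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

def lut4d_apply(lut, rgb, context):
    """4D LUT: an S x S x S x C x 3 table with an extra per-pixel context
    channel, so pixels with identical RGB but different content can receive
    different color transformations."""
    S, C = lut.shape[0], lut.shape[3]
    idx = np.clip((rgb * (S - 1)).round().astype(int), 0, S - 1)
    c = np.clip((context * (C - 1)).round().astype(int), 0, C - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2], c]
```

The extra context axis is what enables the finer, content-dependent control described above.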
arXiv Detail & Related papers (2022-09-05T04:00:57Z) - Scale Invariant Semantic Segmentation with RGB-D Fusion [12.650574326251023]
We propose a neural network architecture for scale-invariant semantic segmentation using RGB-D images.
We incorporate depth information into the RGB data for pixel-wise semantic segmentation to address objects of different scales in an outdoor scene.
Our model is compact and can be easily applied to other RGB models.
arXiv Detail & Related papers (2022-04-10T12:54:27Z) - Influence of Color Spaces for Deep Learning Image Colorization [2.3705923859070217]
Existing colorization methods rely on different color spaces: RGB, YUV, Lab, etc.
In this chapter, we aim to study their influence on the results obtained by training a deep neural network.
We compare the results obtained with the same deep neural network architecture with RGB, YUV and Lab color spaces.
arXiv Detail & Related papers (2022-04-06T14:14:07Z) - Transform your Smartphone into a DSLR Camera: Learning the ISP in the Wild [159.71025525493354]
We propose a trainable Image Signal Processing framework that produces DSLR quality images given RAW images captured by a smartphone.
To address the color misalignments between training image pairs, we employ a color-conditional ISP network and optimize a novel parametric color mapping between each input RAW and reference DSLR image.
arXiv Detail & Related papers (2022-03-20T20:13:59Z) - Boosting RGB-D Saliency Detection by Leveraging Unlabeled RGB Images [89.81919625224103]
Training deep models for RGB-D salient object detection (SOD) often requires a large number of labeled RGB-D images.
We present a Dual-Semi RGB-D Salient Object Detection Network (DS-Net) to leverage unlabeled RGB images for boosting RGB-D saliency detection.
arXiv Detail & Related papers (2022-01-01T03:02:27Z) - RGB-D Image Inpainting Using Generative Adversarial Network with a Late Fusion Approach [14.06830052027649]
Diminished reality is a technology that aims to remove objects from video images and fill in the missing regions with plausible pixels.
We propose an RGB-D image inpainting method using a generative adversarial network.
arXiv Detail & Related papers (2021-10-14T14:44:01Z) - Learning RAW-to-sRGB Mappings with Inaccurately Aligned Supervision [76.41657124981549]
This paper presents a joint learning model for image alignment and RAW-to-sRGB mapping.
Experiments show that our method performs favorably against state-of-the-art methods on the ZRR and SR-RAW datasets.
arXiv Detail & Related papers (2021-08-18T12:41:36Z) - PDC: Piecewise Depth Completion utilizing Superpixels [0.0]
Current approaches often rely on CNN-based methods with several known drawbacks.
We propose our novel Piecewise Depth Completion (PDC), which works completely without deep learning.
In our evaluation, we can show both the influence of the individual proposed processing steps and the overall performance of our method on the challenging KITTI dataset.
arXiv Detail & Related papers (2021-07-14T13:58:39Z) - Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.