TriVol: Point Cloud Rendering via Triple Volumes
- URL: http://arxiv.org/abs/2303.16485v1
- Date: Wed, 29 Mar 2023 06:34:12 GMT
- Title: TriVol: Point Cloud Rendering via Triple Volumes
- Authors: Tao Hu, Xiaogang Xu, Ruihang Chu, Jiaya Jia
- Abstract summary: We present a dense yet lightweight 3D representation, named TriVol, that can be combined with NeRF to render photo-realistic images from point clouds.
Our framework has excellent generalization ability to render a category of scenes/objects without fine-tuning.
- Score: 57.305748806545026
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing learning-based methods for point cloud rendering adopt various 3D
representations and feature querying mechanisms to alleviate the sparsity
problem of point clouds. However, artifacts still appear in rendered images,
due to the challenges in extracting continuous and discriminative 3D features
from point clouds. In this paper, we present a dense yet lightweight 3D
representation, named TriVol, that can be combined with NeRF to render
photo-realistic images from point clouds. Our TriVol consists of triple slim
volumes, each of which is encoded from the point cloud. TriVol has two
advantages. First, it fuses the feature fields of the three volumes at
different scales, and thus extracts both local and non-local features for a
discriminative representation. Second, since the volume size is greatly
reduced, inference through our 3D decoder is efficient, which allows us to
increase the resolution of the 3D space to
render more point details. Extensive experiments on different benchmarks with
varying kinds of scenes/objects demonstrate our framework's effectiveness
compared with current approaches. Moreover, our framework has excellent
generalization ability to render a category of scenes/objects without
fine-tuning.
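A minimal sketch of the kind of feature query such a triple-volume representation implies: three slim feature volumes, each thin along a different spatial axis, are trilinearly sampled at query points and fused by concatenation before a NeRF-style MLP. The shapes, function names, and the fusion-by-concatenation choice below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def query_trivol(volumes, pts):
    """volumes: three tensors shaped (1, C, D, H, W); pts: (N, 3) in [-1, 1]."""
    # grid_sample on 5D input expects a grid (1, D_out, H_out, W_out, 3);
    # fold all N query points into the last spatial axis.
    grid = pts.view(1, 1, 1, -1, 3)
    feats = []
    for vol in volumes:
        # 'bilinear' mode performs trilinear interpolation on 5D input.
        f = F.grid_sample(vol, grid, mode="bilinear", align_corners=True)  # (1, C, 1, 1, N)
        feats.append(f.view(vol.shape[1], -1).t())  # (N, C)
    return torch.cat(feats, dim=-1)  # (N, 3*C), the input to a NeRF-style MLP

# Example: three slim volumes, each thin along a different axis.
C, thin, full = 16, 8, 64
volumes = [
    torch.randn(1, C, thin, full, full),
    torch.randn(1, C, full, thin, full),
    torch.randn(1, C, full, full, thin),
]
pts = torch.rand(1024, 3) * 2 - 1  # query points in normalized coordinates
features = query_trivol(volumes, pts)  # (1024, 48)
```

Because each volume is thin along one axis, the memory cost of the three volumes grows far more slowly with resolution than a single dense grid, which is what makes the higher-resolution decoding described in the abstract feasible.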
Related papers
- Intrinsic Image Decomposition Using Point Cloud Representation [13.771632868567277]
We introduce Point Intrinsic Net (PoInt-Net), which leverages 3D point cloud data to concurrently estimate albedo and shading maps.
PoInt-Net is efficient, achieving consistent performance across point clouds of any size while requiring training only on small-scale point clouds.
arXiv Detail & Related papers (2023-07-20T14:51:28Z)
- Point2Pix: Photo-Realistic Point Cloud Rendering via Neural Radiance Fields [63.21420081888606]
Recent radiance field methods and their extensions synthesize realistic images from 2D inputs.
We present Point2Pix, a novel point renderer that links sparse 3D point clouds with dense 2D image pixels.
arXiv Detail & Related papers (2023-03-29T06:26:55Z)
- Ponder: Point Cloud Pre-training via Neural Rendering [93.34522605321514]
We propose a novel approach to self-supervised learning of point cloud representations via differentiable neural rendering.
The learned point-cloud representation can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image rendering.
arXiv Detail & Related papers (2022-12-31T08:58:39Z)
- Sparse Fuse Dense: Towards High Quality 3D Detection with Depth Completion [31.52721107477401]
Current LiDAR-only 3D detection methods inevitably suffer from the sparsity of point clouds.
We present a novel multi-modal framework SFD (Sparse Fuse Dense), which utilizes pseudo point clouds generated from depth completion.
Our method ranks first on the KITTI car 3D object detection leaderboard, demonstrating the effectiveness of SFD.
arXiv Detail & Related papers (2022-03-18T07:56:35Z)
- Voint Cloud: Multi-View Point Cloud Representation for 3D Understanding [80.04281842702294]
We introduce the multi-view point cloud (Voint cloud), which represents each 3D point as a set of features extracted from multiple viewpoints.
This novel Voint cloud representation combines the compactness of 3D point clouds with the natural view-awareness of multi-view representations.
We deploy a Voint neural network (VointNet) with a theoretically established functional form to learn representations in the Voint space.
arXiv Detail & Related papers (2021-11-30T13:08:19Z)
- How Privacy-Preserving are Line Clouds? Recovering Scene Details from 3D Lines [49.06411148698547]
This paper shows that a significant amount of information about the 3D scene geometry is preserved in line clouds.
Our approach is based on the observation that the closest points between pairs of lines can yield a good approximation to the original 3D points; a closed-form sketch of this computation follows the list below.
arXiv Detail & Related papers (2021-03-08T21:32:43Z)
- ImVoteNet: Boosting 3D Object Detection in Point Clouds with Image Votes [93.82668222075128]
We propose a 3D detection architecture called ImVoteNet for RGB-D scenes.
ImVoteNet is based on fusing 2D votes in images and 3D votes in point clouds.
We validate our model on the challenging SUN RGB-D dataset.
arXiv Detail & Related papers (2020-01-29T05:09:28Z)
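As referenced in the line-clouds entry above, here is a minimal sketch of the closest-points-between-lines observation that recovery rests on. The formula is the standard closed-form solution for two 3D lines; the function name and example values are illustrative, and this is not the paper's full recovery pipeline.

```python
import numpy as np

def closest_points(p1, d1, p2, d2, eps=1e-9):
    """Closest points on the lines p1 + s*d1 and p2 + t*d2 (standard closed form)."""
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < eps:  # (near-)parallel lines: fix s and solve for t alone
        s = 0.0
        t = e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    return p1 + s * d1, p2 + t * d2

# Two lines passing through the same scene point (1, 2, 3): their closest
# points coincide there, so the midpoint recovers the original 3D point.
q1, q2 = closest_points(np.array([1.0, 2.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                        np.array([0.0, 2.0, 3.0]), np.array([1.0, 0.0, 0.0]))
print((q1 + q2) / 2)  # ~ [1. 2. 3.]
```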