How Privacy-Preserving are Line Clouds? Recovering Scene Details from 3D Lines
- URL: http://arxiv.org/abs/2103.05086v1
- Date: Mon, 8 Mar 2021 21:32:43 GMT
- Title: How Privacy-Preserving are Line Clouds? Recovering Scene Details from 3D Lines
- Authors: Kunal Chelani and Fredrik Kahl and Torsten Sattler
- Abstract summary: This paper shows that a significant amount of information about the 3D scene geometry is preserved in line clouds.
Our approach is based on the observation that the closest points between lines can yield a good approximation to the original 3D points.
- Score: 49.06411148698547
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Visual localization is the problem of estimating the camera pose of a given
image with respect to a known scene. Visual localization algorithms are a
fundamental building block in advanced computer vision applications, including
Mixed and Virtual Reality systems. Many algorithms used in practice represent
the scene through a Structure-from-Motion (SfM) point cloud and use 2D-3D
matches between a query image and the 3D points for camera pose estimation. As
recently shown, image details can be accurately recovered from SfM point clouds
by translating renderings of the sparse point clouds to images. To address the
resulting potential privacy risks for user-generated content, it was recently
proposed to lift point clouds to line clouds by replacing 3D points by randomly
oriented 3D lines passing through these points. The resulting representation is
unintelligible to humans and effectively prevents point cloud-to-image
translation. This paper shows that a significant amount of information about
the 3D scene geometry is preserved in these line clouds, allowing us to
(approximately) recover the 3D point positions and thus to (approximately)
recover image content. Our approach is based on the observation that the
closest points between lines can yield a good approximation to the original 3D
points. Code is available at https://github.com/kunalchelani/Line2Point.
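The closest-point observation is easy to illustrate. Below is a minimal NumPy sketch, not the authors' Line2Point implementation: it lifts a point cloud to randomly oriented lines (re-anchored so the original points are not stored), then estimates each point from the closest points between its line and all others. The function names, the density-peak shortcut, and the neighborhood radius are illustrative assumptions; the paper's actual estimator is in the repository linked above.
```python
import numpy as np

def lift_to_line_cloud(points, rng):
    """Replace each 3D point with a random line through it. Each line is
    re-anchored at its closest point to the origin, so the original point
    itself is no longer stored."""
    d = rng.normal(size=points.shape)
    d /= np.linalg.norm(d, axis=1, keepdims=True)            # unit directions
    anchors = points - np.sum(points * d, axis=1, keepdims=True) * d
    return anchors, d                                        # line i: anchors[i] + t * d[i]

def closest_point_on_line1(p1, d1, p2, d2, eps=1e-9):
    """Point on line 1 nearest to line 2 (classic two-line formula; d1, d2 unit)."""
    w = p1 - p2
    b = d1 @ d2
    denom = 1.0 - b * b
    if denom < eps:                                          # near-parallel pair: skip
        return None
    t = (b * (d2 @ w) - (d1 @ w)) / denom
    return p1 + t * d1

def recover_points(anchors, dirs, radius=0.1):
    """For each line, collect its closest points to every other line and keep
    the candidate with the densest neighborhood -- a crude stand-in for the
    paper's peak-finding; `radius` is an illustrative bandwidth."""
    n = len(anchors)
    estimates = np.zeros_like(anchors)
    for i in range(n):
        cands = [closest_point_on_line1(anchors[i], dirs[i], anchors[j], dirs[j])
                 for j in range(n) if j != i]
        cands = np.array([c for c in cands if c is not None])
        pair_dists = np.linalg.norm(cands[:, None] - cands[None, :], axis=2)
        density = (pair_dists < radius).sum(axis=1)
        estimates[i] = cands[density.argmax()]
    return estimates

# Toy check: lift a synthetic cloud, then recover approximate point positions.
rng = np.random.default_rng(42)
points = rng.uniform(-1.0, 1.0, size=(200, 3))
anchors, dirs = lift_to_line_cloud(points, rng)
recovered = recover_points(anchors, dirs)
print("mean recovery error:", np.linalg.norm(recovered - points, axis=1).mean())
```
The density peak works because a line lifted from a point near some original point x must pass close to x, so its closest point to x's line concentrates there; the brute-force O(n^2) pairing is kept only for clarity.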
Related papers
- PointRecon: Online Point-based 3D Reconstruction via Ray-based 2D-3D Matching [10.5792547614413]
We propose a novel online, point-based 3D reconstruction method from posed monocular RGB videos.
Our model maintains a global point cloud representation of the scene, continuously updating the features and 3D locations of points as new images are observed.
Experiments on the ScanNet dataset show that our method achieves state-of-the-art quality among online MVS approaches.
arXiv Detail & Related papers (2024-10-30T17:29:25Z)
- LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes [52.31402192831474]
Existing 3D scene generation models, however, limit the target scene to a specific domain.
We propose LucidDreamer, a domain-free scene generation pipeline.
LucidDreamer produces highly detailed Gaussian splats with no constraint on the domain of the target scene.
arXiv Detail & Related papers (2023-11-22T13:27:34Z)
- EP2P-Loc: End-to-End 3D Point to 2D Pixel Localization for Large-Scale Visual Localization [44.05930316729542]
We propose EP2P-Loc, a novel large-scale visual localization method for 3D point clouds.
To increase the number of inliers, we propose a simple algorithm that removes 3D points that are invisible in the image.
For the first time in this task, we employ a differentiable PnP for end-to-end training.
arXiv Detail & Related papers (2023-09-14T07:06:36Z)
- TriVol: Point Cloud Rendering via Triple Volumes [57.305748806545026]
We present a dense yet lightweight 3D representation, named TriVol, that can be combined with NeRF to render photo-realistic images from point clouds.
Our framework has excellent generalization ability to render a category of scenes/objects without fine-tuning.
arXiv Detail & Related papers (2023-03-29T06:34:12Z)
- Leveraging Single-View Images for Unsupervised 3D Point Cloud Completion [53.93172686610741]
Cross-PCC is an unsupervised point cloud completion method that does not require any complete 3D point clouds.
To take advantage of the complementary information from 2D images, we use a single-view RGB image to extract 2D features.
Our method even achieves comparable performance to some supervised methods.
arXiv Detail & Related papers (2022-12-01T15:11:21Z)
- Unsupervised Learning of Fine Structure Generation for 3D Point Clouds by 2D Projection Matching [66.98712589559028]
We propose an unsupervised approach for 3D point cloud generation with fine structures.
Our method can recover fine 3D structures from 2D silhouette images at different resolutions.
arXiv Detail & Related papers (2021-08-08T22:15:31Z)
- Privacy Preserving Visual SLAM [11.80598014760818]
This study proposes a privacy-preserving Visual SLAM framework for estimating camera poses and performing bundle adjustment with mixed line and point clouds in real time.
Previous studies have proposed localization methods to estimate a camera pose using a line-cloud map for a single image or a reconstructed point cloud.
Our framework achieves the intended privacy preservation and real-time performance using a line-cloud map.
arXiv Detail & Related papers (2020-07-20T18:00:06Z)
- ImVoteNet: Boosting 3D Object Detection in Point Clouds with Image Votes [93.82668222075128]
We propose a 3D detection architecture called ImVoteNet for RGB-D scenes.
ImVoteNet is based on fusing 2D votes in images and 3D votes in point clouds.
We validate our model on the challenging SUN RGB-D dataset.
arXiv Detail & Related papers (2020-01-29T05:09:28Z)