Privacy Preserving Visual SLAM
- URL: http://arxiv.org/abs/2007.10361v2
- Date: Mon, 27 Jul 2020 07:34:46 GMT
- Title: Privacy Preserving Visual SLAM
- Authors: Mikiya Shibuya, Shinya Sumikura, and Ken Sakurada
- Abstract summary: This study proposes a privacy-preserving Visual SLAM framework for estimating camera poses and performing bundle adjustment with mixed line and point clouds in real time.
Previous studies have proposed localization methods that estimate a camera pose from a single image or a reconstructed point cloud using a line-cloud map.
Our framework achieves the intended privacy-preserving formation and real-time performance using a line-cloud map.
- Score: 11.80598014760818
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study proposes a privacy-preserving Visual SLAM framework for estimating camera poses and performing bundle adjustment with mixed line and point clouds in real time. Previous studies have proposed localization methods that estimate a camera pose from a single image or a reconstructed point cloud using a line-cloud map. These methods protect scene privacy against inversion attacks, which reconstruct scene images from a point cloud, by converting the point cloud into a line cloud. However, they are not directly applicable to a video sequence because they do not address computational efficiency, which is critical for estimating camera poses and performing bundle adjustment with mixed line and point clouds in real time. Moreover, no prior work has studied how to optimize a server-side line-cloud map with a point cloud reconstructed from a client video, because the observation points in image coordinates are withheld to prevent inversion attacks, i.e., the reversibility of the 3D lines. The experimental results with synthetic and real data show that our Visual SLAM framework achieves the intended privacy-preserving formation and real-time performance using a line-cloud map.
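As a rough illustration of the line-cloud idea the framework builds on, the sketch below lifts each 3D map point to a 3D line through it with a random direction, so individual point positions are no longer directly exposed. The helper name and this numpy-based formulation are assumptions for illustration, not the authors' code.

    import numpy as np

    def lift_to_line_cloud(points, rng=None):
        # points: (N, 3) array of 3D map points.
        # Each point is replaced by a line through it with a uniformly random
        # unit direction; a single line no longer reveals where along it the
        # original point lay.
        rng = np.random.default_rng() if rng is None else rng
        directions = rng.normal(size=points.shape)
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        return points.copy(), directions  # (a point on each line, its direction)

During localization, each 2D observation then constrains the camera pose through its 3D line rather than through a known 3D point.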
Related papers
- PointRecon: Online Point-based 3D Reconstruction via Ray-based 2D-3D Matching [10.5792547614413]
We propose a novel online, point-based 3D reconstruction method from posed monocular RGB videos.
Our model maintains a global point cloud representation of the scene, continuously updating the features and 3D locations of points as new images are observed.
Experiments on the ScanNet dataset show that our method achieves state-of-the-art quality among online MVS approaches.
arXiv Detail & Related papers (2024-10-30T17:29:25Z)
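A hedged sketch of the online point-based map the PointRecon entry above describes: a global point set whose positions and features are refreshed as new frames arrive. The running-average update and the class layout are assumptions for illustration, not the paper's network.

    import numpy as np

    class OnlinePointMap:
        def __init__(self, feat_dim=32):
            self.xyz = np.empty((0, 3))         # global 3D point positions
            self.feat = np.empty((0, feat_dim)) # per-point feature vectors
            self.count = np.empty((0,))         # observations per point

        def update(self, matched_idx, new_xyz, new_feat):
            # refine existing points with a running average over observations
            c = self.count[matched_idx][:, None]
            self.xyz[matched_idx] = (self.xyz[matched_idx] * c + new_xyz) / (c + 1)
            self.feat[matched_idx] = (self.feat[matched_idx] * c + new_feat) / (c + 1)
            self.count[matched_idx] += 1

        def insert(self, xyz, feat):
            # add newly observed points to the global map
            self.xyz = np.vstack([self.xyz, xyz])
            self.feat = np.vstack([self.feat, feat])
            self.count = np.concatenate([self.count, np.ones(len(xyz))])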
- PointRegGPT: Boosting 3D Point Cloud Registration using Generative Point-Cloud Pairs for Training [90.06520673092702]
We present PointRegGPT, boosting 3D point cloud registration using generative point-cloud pairs for training.
To our knowledge, this is the first generative approach that explores realistic data generation for indoor point cloud registration.
arXiv Detail & Related papers (2024-07-19T06:29:57Z)
- GPN: Generative Point-based NeRF [0.65268245109828]
We propose using Generative Point-based NeRF (GPN) to reconstruct and repair a partial point cloud.
The repaired point cloud can achieve multi-view consistency with the captured images at high spatial resolution.
arXiv Detail & Related papers (2024-04-12T08:14:17Z)
- OmniColor: A Global Camera Pose Optimization Approach of LiDAR-360Camera Fusion for Colorizing Point Clouds [15.11376768491973]
A colored point cloud, as a simple and efficient 3D representation, has many advantages in various fields.
This paper presents OmniColor, a novel and efficient algorithm to colorize point clouds using an independent 360-degree camera.
arXiv Detail & Related papers (2024-04-06T17:41:36Z)
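A minimal sketch of the colorization step behind the OmniColor entry above: project LiDAR points into an equirectangular 360-degree panorama and sample per-point colors. The projection model and pose convention are assumptions; the paper's actual contribution, globally optimizing such camera poses, is not reproduced here.

    import numpy as np

    def colorize(points, image, T):
        # points: (N, 3) world coordinates; image: (H, W, 3) equirectangular
        # panorama; T: (4, 4) world-to-camera transform (assumed convention).
        H, W, _ = image.shape
        p = (T[:3, :3] @ points.T + T[:3, 3:4]).T          # to camera frame
        p = p / np.linalg.norm(p, axis=1, keepdims=True)   # unit bearing vectors
        lon = np.arctan2(p[:, 0], p[:, 2])                 # azimuth in [-pi, pi]
        lat = np.arcsin(np.clip(p[:, 1], -1.0, 1.0))       # elevation
        u = ((lon / np.pi + 1.0) * 0.5 * (W - 1)).astype(int)
        v = ((lat / (np.pi / 2) + 1.0) * 0.5 * (H - 1)).astype(int)
        return image[v, u]                                 # per-point RGB colors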
- Few-shot point cloud reconstruction and denoising via learned Gaussian splats renderings and fine-tuned diffusion features [52.62053703535824]
We propose a method to reconstruct point clouds from few images and to denoise point clouds from their rendering.
To improve reconstruction in constrained settings, we regularize the training of a differentiable renderer with hybrid surface and appearance representations.
We demonstrate how these learned filters can be used to remove point cloud noise without 3D supervision.
arXiv Detail & Related papers (2024-04-01T13:38:16Z)
- Zero-Shot Point Cloud Registration [94.39796531154303]
ZeroReg is the first zero-shot point cloud registration approach that eliminates the need for training on point cloud datasets.
The cornerstone of ZeroReg is the novel transfer of image features from keypoints to the point cloud, enriched by aggregating information from 3D geometric neighborhoods.
On benchmarks such as 3DMatch, 3DLoMatch, and ScanNet, ZeroReg achieves impressive Recall Ratios (RR) of over 84%, 46%, and 75%, respectively.
arXiv Detail & Related papers (2023-12-05T11:33:16Z)
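A hedged sketch of the feature transfer described in the ZeroReg entry above: image features attached to 3D points are enriched by aggregating over each point's geometric neighborhood. The k-nearest-neighbor mean is an assumed stand-in for the paper's actual aggregation scheme.

    import numpy as np

    def aggregate_neighborhood(points, feats, k=8):
        # points: (N, 3); feats: (N, C) image features lifted onto the points.
        # Returns (N, C) features smoothed over each point's k nearest 3D
        # neighbors (the point itself is included as its own nearest neighbor).
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        knn = np.argsort(d, axis=1)[:, :k]   # indices of k nearest neighbors
        return feats[knn].mean(axis=1)       # average features per neighborhood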
- PRED: Pre-training via Semantic Rendering on LiDAR Point Clouds [18.840000859663153]
We propose PRED, a novel image-assisted pre-training framework for outdoor point clouds.
The main ingredient of our framework is semantic rendering conditioned on a Bird's-Eye-View (BEV) feature map.
We further enhance our model's performance by incorporating point-wise masking with a high mask ratio.
arXiv Detail & Related papers (2023-11-08T07:26:09Z)
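A minimal sketch of the point-wise masking with a high mask ratio mentioned in the PRED entry above; the 0.9 default and uniform random sampling are assumptions for illustration.

    import numpy as np

    def mask_points(points, mask_ratio=0.9, rng=None):
        # Randomly hide most input points; the pre-training model must then
        # reason about the scene from the small visible subset.
        rng = np.random.default_rng() if rng is None else rng
        n = len(points)
        keep = rng.permutation(n)[: int(n * (1 - mask_ratio))]
        visible = points[keep]
        masked = np.delete(points, keep, axis=0)
        return visible, masked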
- Leveraging Single-View Images for Unsupervised 3D Point Cloud Completion [53.93172686610741]
Cross-PCC is an unsupervised point cloud completion method that requires no complete 3D point clouds for training.
To take advantage of the complementary information from 2D images, we use a single-view RGB image to extract 2D features.
Our method even achieves comparable performance to some supervised methods.
arXiv Detail & Related papers (2022-12-01T15:11:21Z)
- Shape-invariant 3D Adversarial Point Clouds [111.72163188681807]
Adversarial effectiveness and imperceptibility are two fundamental but conflicting characteristics of adversarial perturbations.
Previous adversarial attacks on 3D point cloud recognition have often been criticized for their noticeable point outliers.
We propose a novel Point-Cloud Sensitivity Map to boost both the efficiency and imperceptibility of point perturbations.
arXiv Detail & Related papers (2022-03-08T12:21:35Z)
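A hedged sketch of a shape-preserving perturbation in the spirit of the Shape-invariant entry above: each point is displaced only within its local tangent plane, estimated by PCA over nearby points, which keeps the shift visually inconspicuous. Random in-plane shifts stand in for the paper's sensitivity-map-guided optimization.

    import numpy as np

    def tangent_plane_perturb(points, eps=0.01, k=8, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        out = points.copy()
        d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
        knn = np.argsort(d, axis=1)[:, :k]   # k nearest neighbors per point
        for i in range(len(points)):
            nbrs = points[knn[i]] - points[knn[i]].mean(axis=0)
            # the smallest principal axis approximates the surface normal
            _, _, vt = np.linalg.svd(nbrs)
            normal = vt[-1]
            shift = rng.normal(size=3) * eps
            shift -= shift.dot(normal) * normal  # project onto tangent plane
            out[i] += shift
        return out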
- How Privacy-Preserving are Line Clouds? Recovering Scene Details from 3D Lines [49.06411148698547]
This paper shows that a significant amount of information about the 3D scene geometry is preserved in line clouds.
Our approach is based on the observation that the closest points between lines can yield a good approximation to the original 3D points.
arXiv Detail & Related papers (2021-03-08T21:32:43Z)
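The geometric core of that observation can be sketched directly: the pair of closest points between two 3D lines, whose midpoint often lands near an original scene point when both lines were lifted from nearby points. A minimal numpy version, assuming unit direction vectors:

    import numpy as np

    def closest_points(o1, d1, o2, d2):
        # Lines x = o + t*d with unit directions d1, d2. Returns the closest
        # point on each line; their midpoint approximates a scene point.
        b = d1.dot(d2)
        w = o1 - o2
        denom = 1.0 - b * b
        if abs(denom) < 1e-9:                    # parallel: no unique solution
            return o1, o2 + w.dot(d2) * d2
        t1 = (b * w.dot(d2) - w.dot(d1)) / denom
        t2 = (w.dot(d2) - b * w.dot(d1)) / denom
        return o1 + t1 * d1, o2 + t2 * d2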
This list is automatically generated from the titles and abstracts of the papers on this site.