Point Cloud Registration of non-rigid objects in sparse 3D Scans with applications in Mixed Reality
- URL: http://arxiv.org/abs/2212.03856v1
- Date: Wed, 7 Dec 2022 18:54:32 GMT
- Title: Point Cloud Registration of non-rigid objects in sparse 3D Scans with applications in Mixed Reality
- Authors: Manorama Jha
- Abstract summary: We study the problem of non-rigid point cloud registration for use cases in the Augmented/Mixed Reality domain.
We focus our attention on a special class of non-rigid deformations that happen in rigid objects with parts that move relative to one another.
We propose an efficient and robust point-cloud registration workflow for such objects and evaluate it on real-world data collected using Microsoft HoloLens 2.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point Cloud Registration is the problem of aligning the corresponding points
of two 3D point clouds referring to the same object. The challenges include
dealing with noise and the partial overlap of real-world 3D scans. For non-rigid
objects, there is the additional challenge of accounting for deformations of the
object's shape that occur between the two 3D scans. In this
project, we study the problem of non-rigid point cloud registration for use
cases in the Augmented/Mixed Reality domain. We focus our attention on a
special class of non-rigid deformations that happen in rigid objects with parts
that move relative to one another about joints, for example, robots with hands
and machines with hinges. We propose an efficient and robust point-cloud
registration workflow for such objects and evaluate it on real-world data
collected using Microsoft HoloLens 2, a leading Mixed Reality platform.
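The abstract does not spell out the registration algorithm itself, so the following is only a minimal sketch of one plausible building block for articulated objects: per-part rigid alignment with point-to-plane ICP from the open-source Open3D library, assuming the object has already been segmented into its rigid parts. The function name, parameters, and segmentation assumption are illustrative, not the authors' implementation.

```python
import numpy as np
import open3d as o3d


def register_rigid_part(source_part, target_scan, voxel=0.02, init=np.eye(4)):
    """Align one rigid part of a segmented object to a target scan.

    Hypothetical helper: per-part point-to-plane ICP, not the paper's method.
    """
    src = source_part.voxel_down_sample(voxel)
    tgt = target_scan.voxel_down_sample(voxel)
    # Point-to-plane ICP needs normals on the target cloud.
    tgt.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 4, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel * 3, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # 4x4 rigid transform for this part


# Usage: one transform per segmented part (segmentation assumed to be given).
# transforms = [register_rigid_part(p, target_scan) for p in source_parts]
```

A full articulated pipeline would additionally have to relate the per-part transforms through the joints connecting the parts, which the abstract does not describe.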
Related papers
- Precise Workcell Sketching from Point Clouds Using an AR Toolbox [1.249418440326334]
Capturing real-world 3D spaces as point clouds is efficient and descriptive, but it comes with sensor errors and lacks object parametrization.
Our method for 3D workcell sketching from point clouds allows users to refine raw point clouds using an Augmented Reality interface.
By utilizing a toolbox and an AR-enabled pointing device, users can enhance point cloud accuracy based on the device's position in 3D space.
arXiv Detail & Related papers (2024-10-01T08:07:51Z) - Bridged Transformer for Vision and Point Cloud 3D Object Detection [92.86856146086316]
Bridged Transformer (BrT) is an end-to-end architecture for 3D object detection.
BrT learns to identify 3D and 2D object bounding boxes from both points and image patches.
We experimentally show that BrT surpasses state-of-the-art methods on SUN RGB-D and ScanNetV2 datasets.
arXiv Detail & Related papers (2022-10-04T05:44:22Z) - Exploiting More Information in Sparse Point Cloud for 3D Single Object
Tracking [9.693724357115762]
3D single object tracking is a key task in 3D computer vision.
The sparsity of point clouds makes it difficult to compute the similarity and locate the object.
We propose a sparse-to-dense and transformer-based framework for 3D single object tracking.
arXiv Detail & Related papers (2022-10-02T13:38:30Z) - Points2NeRF: Generating Neural Radiance Fields from 3D point cloud [0.0]
We propose representing 3D objects as Neural Radiance Fields (NeRFs).
We leverage a hypernetwork paradigm and train the model to take a 3D point cloud with the associated color values as input.
Our method provides efficient 3D object representation and offers several advantages over the existing approaches.
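The hypernetwork paradigm mentioned above is easy to illustrate in isolation: one network predicts the weights of another. The sketch below is plain PyTorch with arbitrary placeholder layer sizes; it shows only the core mechanics and is not the Points2NeRF architecture.

```python
import math
import torch
import torch.nn as nn


class TinyHyperNet(nn.Module):
    """Illustrative hypernetwork: maps an embedding to the weights of a 2-layer MLP."""

    def __init__(self, embed_dim=256, d_in=3, d_hidden=64, d_out=4):
        super().__init__()
        # Shapes of the generated network's parameters: W1, b1, W2, b2.
        self.shapes = [(d_hidden, d_in), (d_hidden,), (d_out, d_hidden), (d_out,)]
        n_params = sum(math.prod(s) for s in self.shapes)
        self.net = nn.Sequential(nn.Linear(embed_dim, 512), nn.ReLU(),
                                 nn.Linear(512, n_params))

    def forward(self, embedding, query):
        # Predict a flat parameter vector and carve it into tensors.
        flat = self.net(embedding)
        params, i = [], 0
        for s in self.shapes:
            n = math.prod(s)
            params.append(flat[i:i + n].reshape(s))
            i += n
        w1, b1, w2, b2 = params
        # Evaluate the generated MLP on query coordinates (e.g. 3D positions).
        h = torch.relu(query @ w1.t() + b1)
        return h @ w2.t() + b2


# Usage with dummy data: a (256,) embedding and 1024 query points -> (1024, 4).
# out = TinyHyperNet()(torch.randn(256), torch.randn(1024, 3))
```

In practice the embedding would come from a point-cloud encoder, so the generated MLP is conditioned on the input shape.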
arXiv Detail & Related papers (2022-06-02T20:23:33Z) - HyperPocket: Generative Point Cloud Completion [19.895219420937938]
We introduce a novel autoencoder-based architecture called HyperPocket that disentangles latent representations.
We leverage a hypernetwork paradigm to fill the spaces, dubbed pockets, that are left by the missing object parts.
Our method offers performance competitive with other state-of-the-art models.
arXiv Detail & Related papers (2021-02-11T12:30:03Z) - 3D Object Classification on Partial Point Clouds: A Practical
Perspective [91.81377258830703]
A point cloud is a popular shape representation adopted in 3D object classification.
This paper introduces a practical setting for classifying partial point clouds of object instances under arbitrary poses.
It proposes a novel algorithm that works in an alignment-then-classification manner.
arXiv Detail & Related papers (2020-12-18T04:00:56Z) - An Overview Of 3D Object Detection [21.159668390764832]
We propose a framework that uses both RGB and point cloud data to perform multiclass object recognition.
We use the recently released nuScenes dataset, a large-scale dataset containing many data formats, to train and evaluate our proposed architecture.
arXiv Detail & Related papers (2020-10-29T14:04:50Z) - Associate-3Ddet: Perceptual-to-Conceptual Association for 3D Point Cloud
Object Detection [64.2159881697615]
Object detection from 3D point clouds remains a challenging task, though recent studies have pushed the envelope with deep learning techniques.
We propose a domain-adaptation-like approach to enhance the robustness of the feature representation.
Our simple yet effective approach fundamentally boosts the performance of 3D point cloud object detection and achieves state-of-the-art results.
arXiv Detail & Related papers (2020-06-08T05:15:06Z) - GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
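For intuition, the "gridding" step can be pictured as rasterizing an unordered point set onto a regular 3D grid so that 3D convolutions can be applied. The snippet below is only a conceptual hard-assignment occupancy sketch with an illustrative function name and resolution; GRNet's actual gridding layer is differentiable, unlike this version.

```python
import numpy as np


def occupancy_grid(points, resolution=64):
    """Rasterize an (N, 3) point cloud into a binary occupancy grid.

    Conceptual stand-in only: GRNet's gridding layer is differentiable,
    whereas this hard assignment is not.
    """
    mins = points.min(axis=0)
    extent = np.maximum(points.max(axis=0) - mins, 1e-8)
    # Map each point to an integer cell index in [0, resolution - 1].
    idx = np.floor((points - mins) / extent * (resolution - 1)).astype(int)
    grid = np.zeros((resolution, resolution, resolution), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid


# Usage: grid = occupancy_grid(np.random.rand(2048, 3))  # -> (64, 64, 64) array
```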
arXiv Detail & Related papers (2020-06-06T02:46:39Z) - ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds [78.25501874120489]
We develop shape-aware adversarial 3D point cloud attacks by leveraging the learned latent space of a point cloud auto-encoder.
Different from prior works, the resulting adversarial 3D point clouds reflect the shape variations in the 3D point cloud space while still being close to the original one.
arXiv Detail & Related papers (2020-05-24T00:03:27Z) - ImVoteNet: Boosting 3D Object Detection in Point Clouds with Image Votes [93.82668222075128]
We propose a 3D detection architecture called ImVoteNet for RGB-D scenes.
ImVoteNet is based on fusing 2D votes in images and 3D votes in point clouds.
We validate our model on the challenging SUN RGB-D dataset.
arXiv Detail & Related papers (2020-01-29T05:09:28Z)