Precise Workcell Sketching from Point Clouds Using an AR Toolbox
- URL: http://arxiv.org/abs/2410.00479v1
- Date: Tue, 1 Oct 2024 08:07:51 GMT
- Title: Precise Workcell Sketching from Point Clouds Using an AR Toolbox
- Authors: Krzysztof Zieliński, Bruce Blumberg, Mikkel Baun Kjærgaard,
- Abstract summary: Capturing real-world 3D spaces as point clouds is efficient and descriptive, but it comes with sensor errors and lacks object parametrization.
Our method for 3D workcell sketching from point clouds allows users to refine raw point clouds using an Augmented Reality interface.
By utilizing a toolbox and an AR-enabled pointing device, users can enhance point cloud accuracy based on the device's position in 3D space.
- Score: 1.249418440326334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Capturing real-world 3D spaces as point clouds is efficient and descriptive, but it comes with sensor errors and lacks object parametrization. These limitations render point clouds unsuitable for various real-world applications, such as robot programming, without extensive post-processing (e.g., outlier removal, semantic segmentation). On the other hand, CAD modeling provides high-quality, parametric representations of 3D space with embedded semantic data, but requires manual component creation that is time-consuming and costly. To address these challenges, we propose a novel solution that combines the strengths of both approaches. Our method for 3D workcell sketching from point clouds allows users to refine raw point clouds using an Augmented Reality (AR) interface that leverages their knowledge and the real-world 3D environment. By utilizing a toolbox and an AR-enabled pointing device, users can enhance point cloud accuracy based on the device's position in 3D space. We validate our approach by comparing it with ground truth models, demonstrating that it achieves a mean error within 1 cm, a significant improvement over standard LiDAR scanner apps.
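As a hedged illustration of the core interaction, the sketch below snaps scanned points near the AR pointer's tracked tip onto a user-indicated plane and then scores the cloud as mean distance to a ground-truth model. The function names, the snapping operation, and all parameters are hypothetical stand-ins, not the paper's actual toolbox.

```python
# Hypothetical sketch, not the authors' code: refine points near the AR
# pointer's tip by snapping them onto a plane the user defines with the
# device, then score the result against a ground-truth model.
import numpy as np

def refine_with_pointer(points, tip_pos, plane_normal, radius=0.05):
    """Snap points within `radius` of the pointer tip onto the plane
    through the tip with the given normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    near = np.linalg.norm(points - tip_pos, axis=1) < radius
    d = (points[near] - tip_pos) @ n        # signed distance to the plane
    refined = points.copy()
    refined[near] -= d[:, None] * n         # remove the off-plane component
    return refined

def mean_error(points, gt_points):
    """Mean nearest-neighbour distance to a ground-truth point set
    (brute force for clarity; use a KD-tree on real scans)."""
    d = np.linalg.norm(points[:, None, :] - gt_points[None, :, :], axis=2)
    return d.min(axis=1).mean()

scan = np.random.rand(2000, 3)              # stand-in for a LiDAR capture
flat = refine_with_pointer(scan, tip_pos=np.array([0.5, 0.5, 0.5]),
                           plane_normal=np.array([0.0, 0.0, 1.0]))
```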
Related papers
- GaussianPU: A Hybrid 2D-3D Upsampling Framework for Enhancing Color Point Clouds via 3D Gaussian Splatting [11.60605616190011]
We propose a novel 2D-3D hybrid colored point cloud upsampling framework (GaussianPU) based on 3D Gaussian Splatting (3DGS) for robotic perception.
A dual scale rendered image restoration network transforms sparse point cloud renderings into dense representations.
We have made a series of enhancements to the vanilla 3DGS, enabling precise control over the number of points.
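A minimal numpy illustration of the 2D-3D round trip such a framework rests on: splat the sparse colored cloud into an RGB-D image, densify it (a restoration network's job, marked as a comment below), and back-project the valid pixels, which also gives direct control over the output point count. Everything here is a stand-in; the actual pipeline renders via 3D Gaussian Splatting and restores with a dual-scale network.

```python
import numpy as np

def rasterize(points, colors, K, h, w):
    """Nearest-point-wins splat of a colored camera-frame cloud into RGB-D."""
    rgb = np.zeros((h, w, 3)); depth = np.full((h, w), np.inf)
    front = points[:, 2] > 1e-6
    pts, cols = points[front], colors[front]
    uvw = pts @ K.T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for i in np.where(ok)[0]:                    # plain loop kept for clarity
        if pts[i, 2] < depth[v[i], u[i]]:
            depth[v[i], u[i]] = pts[i, 2]
            rgb[v[i], u[i]] = cols[i]
    return rgb, depth

def backproject(rgb, depth, K):
    """Lift every valid pixel back to a colored 3D point; the number of
    valid pixels directly controls how many points come back out."""
    v, u = np.where(np.isfinite(depth))
    z = depth[v, u]
    pix = np.stack([u * z, v * z, z], axis=1)
    return pix @ np.linalg.inv(K).T, rgb[v, u]

# A dual-scale restoration network would densify (rgb, depth) here
# before back-projection turns the dense rendering into a dense cloud.
```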
arXiv Detail & Related papers (2024-09-03T03:35:04Z) - 3DSFLabelling: Boosting 3D Scene Flow Estimation by Pseudo Auto-labelling [21.726386822643995]
We present a novel approach to generate a large number of 3D scene flow pseudo labels for real-world LiDAR point clouds.
Specifically, we employ the assumption of rigid body motion to simulate potential object-level rigid movements in autonomous driving scenarios.
By perfectly synthesizing target point clouds based on augmented motion parameters, we readily obtain abundant 3D scene flow labels that are highly consistent with real scenarios.
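A minimal sketch of that recipe, assuming per-point instance ids are available: move each object by a small random rigid transform and read the scene flow labels off the synthesized target cloud. The sampling ranges and z-axis-only rotation are hypothetical simplifications of the paper's augmentation scheme.

```python
import numpy as np

def random_rigid_motion(max_angle=0.1, max_shift=0.5):
    """Sample a small yaw rotation plus a translation, as a driving-scene
    object might move between two LiDAR sweeps."""
    a = np.random.uniform(-max_angle, max_angle)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    t = np.random.uniform(-max_shift, max_shift, size=3)
    return R, t

def pseudo_label(points, instance_ids):
    """Move each object rigidly; the flow label is target minus source, so
    labels are perfectly consistent with the synthesized target cloud."""
    target = points.copy()
    for obj in np.unique(instance_ids):
        m = instance_ids == obj
        R, t = random_rigid_motion()
        target[m] = points[m] @ R.T + t
    return target, target - points     # (synthetic target, scene flow labels)
```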
arXiv Detail & Related papers (2024-02-28T08:12:31Z) - DatasetNeRF: Efficient 3D-aware Data Factory with Generative Radiance Fields [68.94868475824575]
This paper introduces a novel approach capable of generating infinite, high-quality 3D-consistent 2D annotations alongside 3D point cloud segmentations.
We leverage the strong semantic prior within a 3D generative model to train a semantic decoder.
Once trained, the decoder efficiently generalizes across the latent space, enabling the generation of infinite data.
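The train-once, sample-forever recipe can be sketched as below: a frozen generator supplies images plus semantic-rich features, a small decoder is fitted on a handful of annotated generated samples, and afterwards every fresh latent yields a labeled sample. All modules and the random "annotations" are hypothetical stand-ins, not DatasetNeRF's architecture.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for a frozen, pretrained 3D-aware generator: maps a latent
    code to an image plus intermediate features carrying semantic content."""
    def forward(self, z):
        feats = z[:, :, None, None].expand(-1, -1, 32, 32)
        return torch.tanh(feats[:, :3]), feats

gen = TinyGenerator().eval()
decoder = nn.Conv2d(64, 10, kernel_size=1)   # features -> 10 semantic classes
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

# A small annotated set: a few latents with stand-in "human" labels.
zs = torch.randn(8, 64)
anns = torch.randint(0, 10, (8, 32, 32))     # stands in for manual annotation

for step in range(100):
    with torch.no_grad():
        _, feats = gen(zs)
    loss = nn.functional.cross_entropy(decoder(feats), anns)
    opt.zero_grad(); loss.backward(); opt.step()

# Once trained, every fresh latent yields an annotated sample "for free".
image, feats = gen(torch.randn(1, 64))
seg = decoder(feats).argmax(dim=1)
```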
arXiv Detail & Related papers (2023-11-18T21:58:28Z) - TriVol: Point Cloud Rendering via Triple Volumes [57.305748806545026]
We present a dense yet lightweight 3D representation, named TriVol, that can be combined with NeRF to render photo-realistic images from point clouds.
Our framework has excellent generalization ability to render a category of scenes/objects without fine-tuning.
arXiv Detail & Related papers (2023-03-29T06:34:12Z) - StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity, evenly distributed 3D point clouds using a mapping network.
Our framework achieves performance comparable to the state of the art on various metrics in point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z) - Point Cloud Registration of non-rigid objects in sparse 3D Scans with applications in Mixed Reality [0.0]
We study the problem of non-rigid point cloud registration for use cases in the Augmented/Mixed Reality domain.
We focus our attention on a special class of non-rigid deformations that happen in rigid objects with parts that move relative to one another.
We propose an efficient and robust point-cloud registration workflow for such objects and evaluate it on real-world data collected using a Microsoft HoloLens 2.
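One hedged way to realize such a workflow is part-wise rigid alignment: estimate a separate least-squares rotation and translation (Kabsch) for each moving part. The sketch assumes part segmentation and point correspondences are already given, which the real workflow must establish itself.

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation and translation mapping src onto dst (least squares)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def register_by_parts(src, dst, part_ids):
    """Estimate a separate rigid transform for each moving part."""
    out = np.empty_like(src)
    for p in np.unique(part_ids):
        m = part_ids == p
        R, t = kabsch(src[m], dst[m])
        out[m] = src[m] @ R.T + t
    return out
```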
arXiv Detail & Related papers (2022-12-07T18:54:32Z) - Sparse2Dense: Learning to Densify 3D Features for 3D Object Detection [85.08249413137558]
LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors.
Small, distant, and incomplete objects with sparse or few points are often hard to detect.
We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space.
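A hedged sketch of the latent-space densification idea: a teacher encoder (assumed pretrained on dense clouds; an untrained stand-in here) provides target features, and a student seeing only the sparse cloud is trained to match them. Architectures and sizes are toy placeholders, not the Sparse2Dense networks.

```python
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Per-point MLP followed by max pooling, PointNet-style."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim))
    def forward(self, pts):                  # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values

teacher = PointEncoder().eval()              # assumed pretrained on dense clouds
student = PointEncoder()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

dense = torch.randn(8, 4096, 3)              # stand-in dense training clouds
sparse = dense[:, torch.randperm(4096)[:256]]  # simulated sparse observation
with torch.no_grad():
    target = teacher(dense)                  # dense-cloud latent features
loss = nn.functional.mse_loss(student(sparse), target)  # densify in latent space
opt.zero_grad(); loss.backward(); opt.step()
```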
arXiv Detail & Related papers (2022-11-23T16:01:06Z) - Points2NeRF: Generating Neural Radiance Fields from 3D point cloud [0.0]
We propose representing 3D objects as Neural Radiance Fields (NeRFs).
We leverage a hypernetwork paradigm, training the model to take a 3D point cloud with associated color values and return the weights of a NeRF network.
Our method provides efficient 3D object representation and offers several advantages over the existing approaches.
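A toy version of that hypernetwork setup: encode the colored point cloud to a single code, emit the flattened weights of a tiny two-layer NeRF-style MLP, and run that MLP functionally on query coordinates. Sizes, layer counts, and names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

D_IN, D_HID = 3, 32                           # NeRF head: xyz -> (rgb, sigma)
n_w1 = D_IN * D_HID + D_HID                   # first-layer weights + bias
n_w2 = D_HID * 4 + 4                          # output-layer weights + bias

class HyperNet(nn.Module):
    """Encode a colored cloud, then emit the weights of a tiny NeRF MLP."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 128))
        self.head = nn.Linear(128, n_w1 + n_w2)
    def forward(self, pts_rgb):               # (N, 6): xyz + rgb per point
        z = self.enc(pts_rgb).max(dim=0).values
        return self.head(z)                   # flattened target-network weights

def nerf_forward(weights, xyz):
    """Run the generated two-layer MLP on query coordinates."""
    w1 = weights[:D_IN * D_HID].view(D_HID, D_IN)
    b1 = weights[D_IN * D_HID:n_w1]
    rest = weights[n_w1:]
    w2, b2 = rest[:D_HID * 4].view(4, D_HID), rest[D_HID * 4:]
    h = torch.relu(xyz @ w1.T + b1)
    out = h @ w2.T + b2
    return torch.sigmoid(out[:, :3]), torch.relu(out[:, 3])  # rgb, density

weights = HyperNet()(torch.randn(1024, 6))    # cloud -> NeRF weights
rgb, sigma = nerf_forward(weights, torch.randn(100, 3))
```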
arXiv Detail & Related papers (2022-06-02T20:23:33Z) - Simple and Effective Synthesis of Indoor 3D Scenes [78.95697556834536]
We study the problem of synthesizing immersive 3D indoor scenes from one or more images.
Our aim is to generate high-resolution images and videos from novel viewpoints.
We propose an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images.
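The reprojection input to such a GAN can be sketched as below: transform the colored cloud into a hypothetical novel camera pose and splat it into a sparse RGB-D image whose holes the network would then fill. Last write wins here for brevity; a z-buffer would be used in practice.

```python
import numpy as np

def reproject(points_w, colors, R, t, K, h, w):
    """World-frame colored points -> sparse RGB-D seen from a novel pose."""
    cam = points_w @ R.T + t                   # world -> camera frame
    z = cam[:, 2]
    ok = z > 1e-6                              # keep points in front of camera
    u = (K[0, 0] * cam[ok, 0] / z[ok] + K[0, 2]).astype(int)
    v = (K[1, 1] * cam[ok, 1] / z[ok] + K[1, 2]).astype(int)
    rgbd = np.zeros((h, w, 4))
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rgbd[v[inside], u[inside], :3] = colors[ok][inside]
    rgbd[v[inside], u[inside], 3] = z[ok][inside]
    return rgbd                                # holes are the GAN's job to fill
```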
arXiv Detail & Related papers (2022-04-06T17:54:46Z) - DV-Det: Efficient 3D Point Cloud Object Detection with Dynamic Voxelization [0.0]
We propose a novel two-stage framework for efficient 3D point cloud object detection.
We parse the raw point cloud data directly in the 3D space yet achieve impressive efficiency and accuracy.
We highlight results on the KITTI 3D object detection benchmark at 75 FPS and on the Waymo Open Dataset at 25 FPS inference speed, with satisfactory accuracy.
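Dynamic voxelization in its simplest form, sketched in numpy under the usual definition: group points by voxel on the fly, allocating only occupied voxels, so empty space costs nothing. The paper's CUDA implementation is far more involved than this.

```python
import numpy as np

def dynamic_voxelize(points, voxel_size=0.2):
    """Return per-point voxel ids and the centroid of each occupied voxel."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    # unique occupied voxels only; `inverse` maps every point to its voxel slot
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    sums = np.zeros((len(uniq), 3))
    np.add.at(sums, inverse, points)          # scatter-add points into voxels
    counts = np.bincount(inverse, minlength=len(uniq))
    return inverse, sums / counts[:, None]    # voxel id per point, centroids

pts = np.random.rand(10000, 3) * 50.0         # stand-in LiDAR sweep
vox_id, centroids = dynamic_voxelize(pts)
```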
arXiv Detail & Related papers (2021-07-27T10:07:39Z) - PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
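A generic affinity-based association step of the kind such a tracker needs: score track/detection pairs with embedding similarity and solve the assignment with the Hungarian algorithm. The cosine embeddings below are random stand-ins, not PC-DAN's PointNet features.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(prev_emb, curr_emb):
    """Match previous-frame tracks to current detections by cosine affinity."""
    a = prev_emb / np.linalg.norm(prev_emb, axis=1, keepdims=True)
    b = curr_emb / np.linalg.norm(curr_emb, axis=1, keepdims=True)
    affinity = a @ b.T                        # higher = more likely same object
    rows, cols = linear_sum_assignment(-affinity)  # maximize total affinity
    return list(zip(rows, cols))

matches = associate(np.random.rand(5, 64), np.random.rand(6, 64))
```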
arXiv Detail & Related papers (2021-06-03T05:36:39Z)