Scan2Plan: Efficient Floorplan Generation from 3D Scans of Indoor Scenes
- URL: http://arxiv.org/abs/2003.07356v1
- Date: Mon, 16 Mar 2020 17:59:41 GMT
- Title: Scan2Plan: Efficient Floorplan Generation from 3D Scans of Indoor Scenes
- Authors: Ameya Phalak, Vijay Badrinarayanan, Andrew Rabinovich
- Abstract summary: Scan2Plan is a novel approach for accurate estimation of a floorplan from a 3D scan of the structural elements of indoor environments.
The proposed method incorporates a two-stage approach where the initial stage clusters an unordered point cloud representation of the scene into room and wall instances.
The subsequent stage estimates a closed perimeter, parameterized by a simple polygon, for each individual room.
The final floorplan is simply an assembly of all such room perimeters in the global coordinate system.
- Score: 9.71137838903781
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Scan2Plan, a novel approach for accurate estimation of a
floorplan from a 3D scan of the structural elements of indoor environments. The
proposed method incorporates a two-stage approach where the initial stage
clusters an unordered point cloud representation of the scene into room
instances and wall instances using a deep neural network based voting approach.
The subsequent stage estimates a closed perimeter, parameterized by a simple
polygon, for each individual room by finding the shortest path along the
predicted room and wall keypoints. The final floorplan is simply an assembly of
all such room perimeters in the global coordinate system. The Scan2Plan
pipeline produces accurate floorplans for complex layouts, is highly
parallelizable, and is extremely efficient compared to existing methods. The
voting module is trained only on synthetic data and evaluated on the publicly
available Structured3D and BKE datasets, where it demonstrates excellent
qualitative and quantitative results, outperforming state-of-the-art
techniques.
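The second stage described above can be pictured with a toy sketch: given a room's predicted keypoints, order them into a closed perimeter and collect one simple polygon per room. This is only a minimal illustration with hypothetical names, using a greedy nearest-neighbour ordering as a stand-in for the paper's shortest-path search:

```python
import math

def order_perimeter(keypoints):
    """Order a room's predicted wall keypoints into a closed perimeter.
    Greedy nearest-neighbour chaining here is an illustrative stand-in
    for the paper's shortest-path formulation."""
    remaining = list(keypoints)
    path = [remaining.pop(0)]
    while remaining:
        last = path[-1]
        nxt = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nxt)
        path.append(nxt)
    return path  # closing edge path[-1] -> path[0] is implicit

def assemble_floorplan(rooms):
    """The final floorplan is just the collection of per-room
    perimeters, all expressed in a shared global coordinate system."""
    return [order_perimeter(kps) for kps in rooms]

# Corner keypoints of a unit-square room, in scrambled order.
square = [(0, 0), (1, 1), (0, 1), (1, 0)]
print(assemble_floorplan([square]))  # one simple polygon per room
```

Chaining each keypoint to its nearest unvisited neighbour recovers a simple (non-self-intersecting) polygon for well-separated corners, which is all this sketch aims to show; the paper's actual search is what guarantees a valid closed perimeter in general.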
Related papers
- ALSTER: A Local Spatio-Temporal Expert for Online 3D Semantic Reconstruction [62.599588577671796]
We propose an online 3D semantic segmentation method that incrementally reconstructs a 3D semantic map from a stream of RGB-D frames.
Unlike offline methods, ours is directly applicable to scenarios with real-time constraints, such as robotics or mixed reality.
arXiv Detail & Related papers (2023-11-29T20:30:18Z)
- PointOcc: Cylindrical Tri-Perspective View for Point-based 3D Semantic Occupancy Prediction [72.75478398447396]
We propose a cylindrical tri-perspective view to represent point clouds effectively and comprehensively.
Considering the distance distribution of LiDAR point clouds, we construct the tri-perspective view in the cylindrical coordinate system.
We employ spatial group pooling to maintain structural details during projection and adopt 2D backbones to efficiently process each TPV plane.
arXiv Detail & Related papers (2023-08-31T17:57:17Z)
- A Hybrid Semantic-Geometric Approach for Clutter-Resistant Floorplan Generation from Building Point Clouds [2.0859227544921874]
This research proposes a hybrid semantic-geometric approach for clutter-resistant floorplan generation from laser-scanned building point clouds.
The proposed method is evaluated using the metrics of precision, recall, Intersection-over-Union (IOU), Betti error, and warping error.
arXiv Detail & Related papers (2023-05-15T20:08:43Z)
- 2D Floor Plan Segmentation Based on Down-sampling [1.4502611532302039]
We propose a novel 2D floor plan segmentation technique based on a down-sampling approach.
Our method employs continuous down-sampling on a floor plan to maintain its structural information while reducing its complexity.
arXiv Detail & Related papers (2023-03-24T04:39:50Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Neural 3D Scene Reconstruction with the Manhattan-world Assumption [58.90559966227361]
This paper addresses the challenge of reconstructing 3D indoor scenes from multi-view images.
Planar constraints can be conveniently integrated into the recent implicit neural representation-based reconstruction methods.
The proposed method outperforms previous methods by a large margin on 3D reconstruction quality.
arXiv Detail & Related papers (2022-05-05T17:59:55Z)
- 360-DFPE: Leveraging Monocular 360-Layouts for Direct Floor Plan Estimation [43.56963653723287]
We present 360-DFPE, a sequential floor plan estimation method that directly takes 360-images as input without relying on active sensors or 3D information.
Our results show that our monocular solution achieves favorable performance against the current state-of-the-art algorithms.
arXiv Detail & Related papers (2021-12-12T08:36:41Z)
- MonteFloor: Extending MCTS for Reconstructing Accurate Large-Scale Floor Plans [41.31546857809168]
We propose a novel method for reconstructing floor plans from noisy 3D point clouds.
Our main contribution is a principled approach that relies on the Monte Carlo Tree Search (MCTS) algorithm.
We evaluate our method on the recent and challenging Structured3D and Floor-SP datasets.
arXiv Detail & Related papers (2021-03-20T11:36:49Z)
- 3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior [50.73148041205675]
The goal of the Semantic Scene Completion (SSC) task is to simultaneously predict a completed 3D voxel representation of volumetric occupancy and semantic labels of objects in the scene from a single-view observation.
We propose to devise a new geometry-based strategy to embed depth information with low-resolution voxel representation.
Our proposed geometric embedding works better than the depth feature learning from habitual SSC frameworks.
arXiv Detail & Related papers (2020-03-31T09:33:46Z)
- Plane Pair Matching for Efficient 3D View Registration [7.920114031312631]
We present a novel method to estimate the motion matrix between overlapping pairs of 3D views in the context of indoor scenes.
We use the Manhattan world assumption to introduce lightweight geometric constraints, in the form of quadruplets of planes, into the problem.
We validate our approach on a toy example and present quantitative experiments on a public RGB-D dataset, comparing against recent state-of-the-art methods.
arXiv Detail & Related papers (2020-01-20T11:15:26Z) - Learning multiview 3D point cloud registration [74.39499501822682]
We present a novel, end-to-end learnable, multiview 3D point cloud registration algorithm.
Our approach outperforms the state-of-the-art by a significant margin, while being end-to-end trainable and computationally less costly.
arXiv Detail & Related papers (2020-01-15T03:42:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.