Self-training Room Layout Estimation via Geometry-aware Ray-casting
- URL: http://arxiv.org/abs/2407.15041v1
- Date: Sun, 21 Jul 2024 03:25:55 GMT
- Title: Self-training Room Layout Estimation via Geometry-aware Ray-casting
- Authors: Bolivar Solarte, Chin-Hsuan Wu, Jin-Cheng Jhang, Jonathan Lee, Yi-Hsuan Tsai, Min Sun
- Abstract summary: We introduce a geometry-aware self-training framework for room layout estimation models on unseen scenes with unlabeled data.
Our approach utilizes a ray-casting formulation to aggregate multiple estimates from different viewing positions.
- Score: 27.906107629563852
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we introduce a novel geometry-aware self-training framework for room layout estimation models on unseen scenes with unlabeled data. Our approach utilizes a ray-casting formulation to aggregate multiple estimates from different viewing positions, enabling the computation of reliable pseudo-labels for self-training. In particular, our ray-casting approach enforces multi-view consistency along all ray directions and prioritizes spatial proximity to the camera view for geometry reasoning. As a result, our geometry-aware pseudo-labels effectively handle complex room geometries and occluded walls without relying on assumptions such as Manhattan World or planar room walls. Evaluation on publicly available datasets, including synthetic and real-world scenarios, demonstrates significant improvements in current state-of-the-art layout models without using any human annotation.
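The multi-view aggregation described in the abstract can be sketched as follows. This is a minimal illustrative version, assuming each view contributes a sampled 2D boundary point set on the floor plane; the function name, ray binning, and inverse-distance weighting are simplifications for illustration, not the paper's exact formulation:

```python
import math
from collections import defaultdict

def ray_cast_pseudo_labels(view_estimates, center, num_rays=360):
    """Aggregate multiple layout estimates into per-ray pseudo-labels.

    view_estimates: list of point lists [(x, y), ...], one per camera view,
                    each sampling a room-boundary estimate on the floor plane.
    center: (x, y) position from which rays are cast.
    Returns: dict {ray_index: distance}, one fused boundary distance per
             ray direction that received at least one hit.
    """
    cx, cy = center
    hits = defaultdict(list)
    for points in view_estimates:
        for (x, y) in points:
            dx, dy = x - cx, y - cy
            dist = math.hypot(dx, dy)
            if dist == 0.0:
                continue
            # Bin the point into the ray direction it lies along.
            angle = math.atan2(dy, dx) % (2 * math.pi)
            ray = int(angle / (2 * math.pi) * num_rays) % num_rays
            hits[ray].append(dist)
    fused = {}
    for ray, dists in hits.items():
        # Prioritize spatial proximity: weight each candidate by 1/distance,
        # so nearer (likely unoccluded) boundary evidence dominates.
        weights = [1.0 / d for d in dists]
        fused[ray] = sum(w * d for w, d in zip(weights, dists)) / sum(weights)
    return fused
```

Because fusion happens independently per ray, no Manhattan World or planar-wall assumption is needed: each direction simply keeps the consensus boundary that nearby views agree on.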
Related papers
- Self-supervised 3D Point Cloud Completion via Multi-view Adversarial Learning [61.14132533712537]
We propose MAL-SPC, a framework that effectively leverages both object-level and category-specific geometric similarities to complete missing structures.
Our MAL-SPC does not require any 3D complete supervision and only necessitates a single partial point cloud for each object.
arXiv Detail & Related papers (2024-07-13T06:53:39Z) - Robust Geometry-Preserving Depth Estimation Using Differentiable Rendering [93.94371335579321]
We propose a learning framework that trains models to predict geometry-preserving depth without requiring extra data or annotations.
Comprehensive experiments underscore our framework's superior generalization capabilities.
Our innovative loss functions empower the model to autonomously recover domain-specific scale-and-shift coefficients.
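Recovering scale-and-shift coefficients, as mentioned above, is commonly done with a closed-form least-squares alignment between predicted and reference depth. The sketch below shows that standard alignment; it is illustrative and not the paper's specific loss formulation:

```python
def align_scale_shift(pred, target):
    """Closed-form least-squares scale s and shift t minimizing
    sum_i (s * pred[i] + t - target[i])**2, the usual way to align
    scale/shift-ambiguous depth predictions to a reference."""
    n = len(pred)
    mean_p = sum(pred) / n
    mean_t = sum(target) / n
    var = sum((p - mean_p) ** 2 for p in pred)
    cov = sum((p - mean_p) * (t - mean_t) for p, t in zip(pred, target))
    s = cov / var          # optimal scale
    t = mean_t - s * mean_p  # optimal shift
    return s, t
```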
arXiv Detail & Related papers (2023-09-18T12:36:39Z) - Neural Semantic Surface Maps [52.61017226479506]
We present an automated technique for computing a map between two genus-zero shapes, which matches semantically corresponding regions to one another.
Our approach can generate semantic surface-to-surface maps, eliminating manual annotations or any 3D training data requirement.
arXiv Detail & Related papers (2023-09-09T16:21:56Z) - GARF: Geometry-Aware Generalized Neural Radiance Field [47.76524984421343]
We propose Geometry-Aware Generalized Neural Radiance Field (GARF) with a geometry-aware dynamic sampling (GADS) strategy.
Our framework infers the unseen scenes on both pixel-scale and geometry-scale with only a few input images.
Experiments on indoor and outdoor datasets show that GARF reduces samples by more than 25%, while improving rendering quality and 3D geometry estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z) - 360-MLC: Multi-view Layout Consistency for Self-training and
Hyper-parameter Tuning [40.93848397359068]
We present 360-MLC, a self-training method based on multi-view layout consistency for fine-tuning monocular room layout models.
We leverage the entropy information in multiple layout estimations as a quantitative metric to measure the geometry consistency of the scene.
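The entropy-as-consistency idea can be sketched for a single ray direction: bin the boundary distances predicted by different views and compute the entropy of the resulting histogram. Low entropy means the views agree (consistent geometry); high entropy flags unreliable regions. The binning scheme here is an illustrative assumption, not 360-MLC's exact formulation:

```python
import math
from collections import Counter

def layout_entropy(estimates, num_bins=16, max_dist=10.0):
    """Entropy of multiple boundary-distance estimates for one ray direction.

    estimates: list of distances (one per view) for the same ray.
    Returns 0.0 when all views fall in the same bin (perfect agreement),
    and grows as the views disagree.
    """
    bins = Counter(min(int(d / max_dist * num_bins), num_bins - 1)
                   for d in estimates)
    n = len(estimates)
    return -sum((c / n) * math.log(c / n) for c in bins.values())
```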
arXiv Detail & Related papers (2022-10-24T03:31:48Z) - Towards Self-Supervised Category-Level Object Pose and Size Estimation [121.28537953301951]
This work presents a self-supervised framework for category-level object pose and size estimation from a single depth image.
We leverage the geometric consistency residing in point clouds of the same shape for self-supervision.
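One common way to exploit geometric consistency between point clouds of the same shape is a correspondence-free Chamfer objective: transform a canonical shape by the predicted pose/size and compare it against the observed cloud. The sketch below uses only a scale and translation, which is a simplification of full 6-DoF pose and not this paper's exact loss:

```python
def sq_dist(p, q):
    # Squared Euclidean distance between two points of equal dimension.
    return sum((a - b) ** 2 for a, b in zip(p, q))

def chamfer(a, b):
    # Symmetric Chamfer distance: mean nearest-neighbor squared distance,
    # computed in both directions and summed.
    fwd = sum(min(sq_dist(p, q) for q in b) for p in a) / len(a)
    bwd = sum(min(sq_dist(q, p) for p in a) for q in b) / len(b)
    return fwd + bwd

def consistency_loss(canonical, observed, scale, translation):
    """Self-supervised loss: apply the predicted scale and translation to the
    canonical shape, then measure Chamfer distance to the observed cloud."""
    transformed = [tuple(scale * c + t for c, t in zip(p, translation))
                   for p in canonical]
    return chamfer(transformed, observed)
```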
arXiv Detail & Related papers (2022-03-06T06:02:30Z) - Geometry-Aware Self-Training for Unsupervised Domain Adaptation on Object Point Clouds [36.49322708074682]
This paper proposes a new method of geometry-aware self-training (GAST) for unsupervised domain adaptation of object point cloud classification.
Specifically, this paper aims to learn a domain-shared representation of semantic categories, via two novel self-supervised geometric learning tasks as feature regularization.
In addition, the diverse point distributions across datasets can be normalized with a novel curvature-aware distortion localization.
arXiv Detail & Related papers (2021-08-20T13:29:11Z) - Visual SLAM with Graph-Cut Optimized Multi-Plane Reconstruction [11.215334675788952]
This paper presents a semantic planar SLAM system that improves pose estimation and mapping using cues from an instance planar segmentation network.
While the mainstream approaches are using RGB-D sensors, employing a monocular camera with such a system still faces challenges such as robust data association and precise geometric model fitting.
arXiv Detail & Related papers (2021-08-09T18:16:08Z) - Hermitian Symmetric Spaces for Graph Embeddings [0.0]
We learn continuous representations of graphs in spaces of symmetric matrices over C.
These spaces offer a rich geometry that simultaneously admits hyperbolic and Euclidean subspaces.
The proposed models are able to automatically adapt to very dissimilar arrangements without any a priori estimates of graph features.
arXiv Detail & Related papers (2021-05-11T18:14:52Z) - GELATO: Geometrically Enriched Latent Model for Offline Reinforcement
Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z) - Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection
Consistency [114.02182755620784]
We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion and depth in a monocular camera setup without supervision.
Our framework is shown to outperform the state-of-the-art depth and motion estimation methods.
arXiv Detail & Related papers (2021-02-04T14:26:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.