Training-free zero-shot 3D symmetry detection with visual features back-projected to geometry
- URL: http://arxiv.org/abs/2505.24162v1
- Date: Fri, 30 May 2025 03:09:18 GMT
- Title: Training-free zero-shot 3D symmetry detection with visual features back-projected to geometry
- Authors: Isaac Aguirre, Ivan Sipiran
- Abstract summary: We present a training-free approach for zero-shot 3D symmetry detection that leverages visual features from foundation vision models such as DINOv2. Our work demonstrates how foundation vision models can help in solving complex 3D geometric problems such as symmetry detection.
- Score: 0.6445605125467574
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a simple yet effective training-free approach for zero-shot 3D symmetry detection that leverages visual features from foundation vision models such as DINOv2. Our method extracts features from rendered views of 3D objects and backprojects them onto the original geometry. We demonstrate the symmetric invariance of these features and use them to identify reflection-symmetry planes through a proposed algorithm. Experiments on a subset of ShapeNet demonstrate that our approach outperforms both traditional geometric methods and learning-based approaches without requiring any training data. Our work demonstrates how foundation vision models can help in solving complex 3D geometric problems such as symmetry detection.
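The abstract outlines a two-stage pipeline: lift per-pixel features from rendered views back onto the 3D surface, then score candidate reflection planes by how well the back-projected features agree across the reflection. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: the rendering and DINOv2 back-projection stage is replaced by placeholder features, candidate planes are passed in by hand rather than sampled as in the paper, and the function names are mine.

```python
import numpy as np

def reflect(points: np.ndarray, normal: np.ndarray, offset: float) -> np.ndarray:
    """Reflect points across the plane {x : normal . x = offset} (normal is unit length)."""
    d = points @ normal - offset                        # signed distance to the plane
    return points - 2.0 * d[:, None] * normal[None, :]

def symmetry_score(points: np.ndarray, feats: np.ndarray,
                   normal: np.ndarray, offset: float) -> float:
    """Score a candidate reflection plane: mirror every surface point, find its
    nearest surface point, and compare their per-point features with cosine similarity."""
    mirrored = reflect(points, normal, offset)
    # brute-force nearest neighbour; a KD-tree would be used at realistic scales
    d2 = ((mirrored[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)
    a = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return float((a * a[nn]).sum(axis=1).mean())

# Toy usage: in the paper, `feats` would be DINOv2 features back-projected from
# rendered views; here they are random placeholders copied across the x = 0 plane
# to mimic the claimed symmetric invariance of the visual features.
rng = np.random.default_rng(0)
half = rng.normal(size=(200, 3))
half[:, 0] = np.abs(half[:, 0])
points = np.vstack([half, half * np.array([-1.0, 1.0, 1.0])])
f_half = rng.normal(size=(200, 64))
feats = np.vstack([f_half, f_half])

good = symmetry_score(points, feats, np.array([1.0, 0.0, 0.0]), 0.0)  # true symmetry plane
bad = symmetry_score(points, feats, np.array([0.0, 1.0, 0.0]), 0.0)   # non-symmetry plane
print(f"x-plane score {good:.3f}  vs  y-plane score {bad:.3f}")       # expect ~1.0 vs much lower
```

In this toy setup the true symmetry plane scores near 1.0 while an arbitrary plane scores much lower; the paper's proposed algorithm for selecting the plane itself is more involved than this exhaustive scoring.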
Related papers
- Symmetry Strikes Back: From Single-Image Symmetry Detection to 3D Generation [29.732780338284353]
We introduce Reflect3D, a scalable, zero-shot symmetry detector capable of robust generalization to diverse and real-world scenarios.
We show the practical benefit of incorporating detected symmetry into single-image 3D generation pipelines.
arXiv Detail & Related papers (2024-11-26T04:14:31Z) - Partial Symmetry Detection for 3D Geometry using Contrastive Learning with Geodesic Point Cloud Patches [10.48309709793733]
We propose to learn rotation, reflection, translation and scale invariant local shape features for geodesic point cloud patches.
We show that our approach is able to extract multiple valid solutions for this ambiguous problem.
We incorporate the detected symmetries together with a region growing algorithm to demonstrate a downstream task.
arXiv Detail & Related papers (2023-12-13T15:48:50Z) - Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previously reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z) - SIDER: Single-Image Neural Optimization for Facial Geometric Detail Recovery [54.64663713249079]
SIDER is a novel photometric optimization method that recovers detailed facial geometry from a single image in an unsupervised manner.
In contrast to prior work, SIDER does not rely on any dataset priors and does not require additional supervision from multiple views, lighting changes or ground truth 3D shape.
arXiv Detail & Related papers (2021-08-11T22:34:53Z) - Learning Geometry-Guided Depth via Projective Modeling for Monocular 3D Object Detection [70.71934539556916]
We learn geometry-guided depth estimation with projective modeling to advance monocular 3D object detection.
Specifically, a principled geometric formulation with projective modeling of 2D and 3D depth predictions is devised within the monocular 3D object detection network.
Our method remarkably improves the detection performance of the state-of-the-art monocular method by 2.80% on the moderate test setting without using extra data.
arXiv Detail & Related papers (2021-07-29T12:30:39Z) - Joint Deep Multi-Graph Matching and 3D Geometry Learning from
Inhomogeneous 2D Image Collections [57.60094385551773]
We propose a trainable framework for learning a deformable 3D geometry model from inhomogeneous image collections.
In addition, we obtain the underlying 3D geometry of the objects depicted in the 2D images.
arXiv Detail & Related papers (2021-03-31T17:25:36Z) - Learning 3D Face Reconstruction with a Pose Guidance Network [49.13404714366933]
We present a self-supervised learning approach to monocular 3D face reconstruction with a pose guidance network (PGN).
First, we unveil the bottleneck of pose estimation in prior parametric 3D face learning methods, and propose to utilize 3D face landmarks for estimating pose parameters.
With our specially designed PGN, our model can learn from both faces with fully labeled 3D landmarks and unlimited unlabeled in-the-wild face images.
arXiv Detail & Related papers (2020-10-09T06:11:17Z) - Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z) - SymmetryNet: Learning to Predict Reflectional and Rotational Symmetries of 3D Shapes from Single-View RGB-D Images [26.38270361331076]
We propose an end-to-end deep neural network which is able to predict both reflectional and rotational symmetries of 3D objects.
We also contribute a benchmark of 3D symmetry detection based on single-view RGB-D images.
arXiv Detail & Related papers (2020-08-02T14:10:09Z) - Learning to Detect 3D Reflection Symmetry for Single-View Reconstruction [32.14605731030579]
3D reconstruction from a single RGB image is a challenging problem in computer vision.
Previous methods are usually solely data-driven, which leads to inaccurate 3D shape recovery and limited generalization capability.
We present a geometry-based end-to-end deep learning framework that first detects the mirror plane of reflection symmetry that commonly exists in man-made objects and then predicts depth maps by finding the intra-image pixel-wise correspondence of the symmetry (a geometric sketch of this correspondence appears after this list).
arXiv Detail & Related papers (2020-06-17T17:58:59Z)
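The intra-image correspondence used by the last paper above can be made concrete with a small amount of camera geometry: back-project a pixel with a depth hypothesis, reflect the resulting 3D point across the candidate mirror plane, and reproject it into the same image. The sketch below is only an illustration of that geometric step; the intrinsics, plane parameters, and function name are assumptions of mine, not that paper's code.

```python
import numpy as np

def symmetric_correspondence(pixel, depth, K, n, d):
    """Given a pixel with a depth hypothesis, camera intrinsics K, and a mirror
    plane {X : n . X + d = 0} in camera coordinates (n unit length), return the
    image location of the pixel's reflection-symmetric counterpart."""
    u, v = pixel
    # back-project the pixel to a 3D point in camera coordinates
    X = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # reflect the point across the mirror plane
    X_ref = X - 2.0 * (n @ X + d) * n
    # project the mirrored point back into the image
    x = K @ X_ref
    return x[:2] / x[2]

# toy usage: a pinhole camera and a vertical mirror plane near the optical axis
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
n = np.array([1.0, 0.0, 0.0])   # plane x = 0.1 in camera coordinates
d = -0.1
print(symmetric_correspondence((400.0, 260.0), depth=2.0, K=K, n=n, d=d))
```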