SelfGeo: Self-supervised and Geodesic-consistent Estimation of Keypoints on Deformable Shapes
- URL: http://arxiv.org/abs/2408.02291v1
- Date: Mon, 5 Aug 2024 08:00:30 GMT
- Title: SelfGeo: Self-supervised and Geodesic-consistent Estimation of Keypoints on Deformable Shapes
- Authors: Mohammad Zohaib, Luca Cosmo, Alessio Del Bue
- Abstract summary: "SelfGeo" is a self-supervised method that computes persistent 3D keypoints of non-rigid objects from arbitrary PCDs without the need for human annotations.
Our main contribution is to enforce that keypoints deform along with the shape while keeping constant geodesic distances among them.
We show experimentally that the use of geodesic distances has a clear advantage in challenging dynamic scenes.
- Score: 19.730602733938216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised 3D keypoint estimation from Point Cloud Data (PCD) is a complex task, made even more challenging when the object shape is deforming. Keypoints should be semantically and geometrically consistent across all 3D frames: each keypoint should be anchored to a specific part of the deforming shape irrespective of intrinsic and extrinsic motion. This paper presents "SelfGeo", a self-supervised method that computes persistent 3D keypoints of non-rigid objects from arbitrary PCDs without the need for human annotations. The gist of SelfGeo is to estimate keypoints across frames that respect invariant properties of deforming bodies. Our main contribution is to enforce that keypoints deform along with the shape while keeping constant geodesic distances among them. This principle guides the design of a set of losses whose minimization makes repeatable keypoints emerge at specific semantic locations of the non-rigid shape. We show experimentally that the use of geodesic distances has a clear advantage in challenging dynamic scenes and with different classes of deforming shapes (humans and animals). Code and data are available at: https://github.com/IIT-PAVIS/SelfGeo
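To make the geodesic-consistency idea concrete, here is a minimal sketch (not the authors' implementation): geodesic distances between keypoints are approximated by shortest paths on a k-NN graph built over each frame's point cloud, and the change of the inter-keypoint geodesic matrix between two frames of the same deforming sequence is penalized. For simplicity the sketch assumes keypoints are given as indices of points in the cloud and that the k-NN graph is connected; the function names (`knn_graph`, `keypoint_geodesics`, `geodesic_consistency_loss`) are hypothetical.

```python
# Sketch of a geodesic-preservation penalty between two frames of a deforming shape.
# Not the SelfGeo code: geodesics are approximated with shortest paths on a k-NN graph.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path
from scipy.spatial import cKDTree


def knn_graph(points, k=8):
    """Sparse k-NN graph over the point cloud with Euclidean edge weights."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)       # first neighbor is the point itself
    n = len(points)
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    vals = dists[:, 1:].ravel()
    return csr_matrix((vals, (rows, cols)), shape=(n, n))


def keypoint_geodesics(points, keypoint_idx, k=8):
    """Approximate pairwise geodesic distances between keypoints
    as graph shortest paths starting from the keypoint nodes."""
    graph = knn_graph(points, k)
    d = shortest_path(graph, directed=False, indices=keypoint_idx)
    return d[:, keypoint_idx]                       # (K, K) geodesic distance matrix


def geodesic_consistency_loss(points_t, kp_idx_t, points_t1, kp_idx_t1, k=8):
    """Penalize changes of inter-keypoint geodesic distances between two frames:
    for a (nearly) isometric deformation this quantity should stay small."""
    D_t = keypoint_geodesics(points_t, kp_idx_t, k)
    D_t1 = keypoint_geodesics(points_t1, kp_idx_t1, k)
    return np.mean(np.abs(D_t - D_t1))
```

In the paper the keypoints are regressed by a network and such terms are minimized jointly with other losses during training; this sketch only illustrates the geodesic-preservation principle computed between two frames.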
Related papers
- Zero-Shot 3D Shape Correspondence [67.18775201037732]
We propose a novel zero-shot approach to computing correspondences between 3D shapes.
We exploit the exceptional reasoning capabilities of recent foundation models in language and vision.
Our approach produces highly plausible results in a zero-shot manner, especially between strongly non-isometric shapes.
arXiv Detail & Related papers (2023-06-05T21:14:23Z)
- Few-shot Geometry-Aware Keypoint Localization [13.51645400661565]
We present a novel formulation that learns to localize semantically consistent keypoint definitions.
We use a few user-labeled 2D images as input examples, which are extended via self-supervision.
We introduce 3D geometry-aware constraints to uplift keypoints, achieving more accurate 2D localization.
arXiv Detail & Related papers (2023-03-30T08:19:42Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Piecewise Planar Hulls for Semi-Supervised Learning of 3D Shape and Pose from 2D Images [133.68032636906133]
We study the problem of estimating 3D shape and pose of an object in terms of keypoints, from a single 2D image.
The shape and pose are learned directly from images collected by categories and their partial 2D keypoint annotations.
arXiv Detail & Related papers (2022-11-14T16:18:11Z)
- SNAKE: Shape-aware Neural 3D Keypoint Field [62.91169625183118]
Detecting 3D keypoints from point clouds is important for shape reconstruction.
This work investigates the dual question: can shape reconstruction benefit 3D keypoint detection?
We propose a novel unsupervised paradigm named SNAKE, which is short for shape-aware neural 3D keypoint field.
arXiv Detail & Related papers (2022-06-03T17:58:43Z)
- KeypointDeformer: Unsupervised 3D Keypoint Discovery for Shape Control [64.46042014759671]
KeypointDeformer is an unsupervised method for shape control through automatically discovered 3D keypoints.
Our approach produces intuitive and semantically consistent control of shape deformations.
arXiv Detail & Related papers (2021-04-22T17:59:08Z)
- UKPGAN: A General Self-Supervised Keypoint Detector [43.35270822722044]
UKPGAN is a general self-supervised 3D keypoint detector.
Our keypoints align well with human-annotated keypoint labels.
Our model is stable under both rigid and non-rigid transformations.
arXiv Detail & Related papers (2020-11-24T09:08:21Z)
- Unsupervised Learning of Category-Specific Symmetric 3D Keypoints from Point Sets [71.84892018102465]
This paper aims at learning category-specific 3D keypoints, in an unsupervised manner, using a collection of misaligned 3D point clouds of objects from an unknown category.
To the best of our knowledge, this is the first work on learning such keypoints directly from 3D point clouds.
arXiv Detail & Related papers (2020-03-17T10:28:02Z)