Enforcing Topological Interaction between Implicit Surfaces via Uniform
Sampling
- URL: http://arxiv.org/abs/2307.08716v1
- Date: Sun, 16 Jul 2023 10:07:15 GMT
- Title: Enforcing Topological Interaction between Implicit Surfaces via Uniform Sampling
- Authors: Hieu Le, Nicolas Talabot, Jiancheng Yang, Pascal Fua
- Abstract summary: We propose a novel method to refine 3D object representations, ensuring that their surfaces adhere to a topological prior.
Our proposed method enables accurate 3D reconstruction of human hearts, ensuring proper topological connectivity between components.
- Score: 54.545963674457575
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Objects interact with each other in various ways, including containment,
contact, or maintaining fixed distances. Ensuring these topological
interactions is crucial for accurate modeling in many scenarios. In this paper,
we propose a novel method to refine 3D object representations, ensuring that
their surfaces adhere to a topological prior. Our key observation is that the
object interaction can be observed via a stochastic approximation method: the
statistics of signed distances from a large number of random points to the
object surfaces reflect the interaction between them. Thus, the object
interaction can be indirectly manipulated by choosing a set of points as
anchors to refine the object surfaces. In particular, we show that our method
can be used to enforce two objects to have a specific contact ratio while
having no surface intersection. Experiments show that our
proposed method enables accurate 3D reconstruction of human hearts, ensuring
proper topological connectivity between components. Further, we show that our
proposed method can be used to simulate various ways a hand can interact with
an arbitrary object.
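The core observation above can be illustrated with a minimal sketch: uniformly sample points in a box enclosing two objects, evaluate their signed distance functions (SDFs), and use the statistics of those distances to detect and penalize surface intersection. The SDFs, box bounds, and penalty below are illustrative stand-ins, not the paper's actual models or loss.

```python
import numpy as np

# Hypothetical SDFs for two objects (negative inside, positive outside).
# Unit spheres stand in for learned implicit surfaces here.
def sdf_a(points):
    return np.linalg.norm(points - np.array([0.0, 0.0, 0.0]), axis=1) - 1.0

def sdf_b(points):
    return np.linalg.norm(points - np.array([1.5, 0.0, 0.0]), axis=1) - 1.0

rng = np.random.default_rng(0)
# Uniformly sample random points in a box enclosing both objects.
points = rng.uniform(-3.0, 3.0, size=(100_000, 3))

da, db = sdf_a(points), sdf_b(points)

# Statistics of the signed distances reveal the topological relation:
# points lying inside both objects indicate a surface intersection.
mask = (da < 0) & (db < 0)
overlap_fraction = np.mean(mask)

# A simple penalty over the overlapping points, summing each point's
# penetration depth into both surfaces; such anchor points could drive
# the surface refinement (a sketch, not the paper's exact loss).
penalty = np.sum(-da[mask] - db[mask])

print(f"overlap fraction: {overlap_fraction:.4f}")
```

With the two spheres placed 1.5 units apart, a small but nonzero fraction of the samples falls inside both, so the estimated overlap and the penalty are positive; driving the penalty to zero would separate the surfaces while other anchor points could maintain a desired contact ratio.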
Related papers
- Spatial and Surface Correspondence Field for Interaction Transfer [27.250373252507547]
We introduce a new method for the task of interaction transfer.
Our method characterizes the example interaction using a combined spatial and surface representation.
Experiments conducted on human-chair and hand-mug interaction transfer tasks show that our approach can handle larger geometry and topology variations.
arXiv Detail & Related papers (2024-05-06T07:30:31Z)
- Implicit Modeling of Non-rigid Objects with Cross-Category Signals [28.956412015920936]
MODIF is a multi-object deep implicit function that jointly learns the deformation fields and instance-specific latent codes for multiple objects at once.
We show that MODIF can proficiently learn the shape representation of each organ and their relations to others, to the point that shapes missing from unseen instances can be consistently recovered.
arXiv Detail & Related papers (2023-12-15T22:34:17Z)
- LEMON: Learning 3D Human-Object Interaction Relation from 2D Images [56.6123961391372]
Learning 3D human-object interaction relation is pivotal to embodied AI and interaction modeling.
Most existing methods approach the goal by learning to predict isolated interaction elements.
We present LEMON, a unified model that mines interaction intentions of the counterparts and employs curvatures to guide the extraction of geometric correlations.
arXiv Detail & Related papers (2023-12-14T14:10:57Z)
- Controllable Human-Object Interaction Synthesis [77.56877961681462]
We propose Controllable Human-Object Interaction Synthesis (CHOIS) to generate synchronized object motion and human motion in 3D scenes.
Here, language descriptions inform style and intent, and waypoints, which can be effectively extracted from high-level planning, ground the motion in the scene.
Our module seamlessly integrates with a path planning module, enabling the generation of long-term interactions in 3D environments.
arXiv Detail & Related papers (2023-12-06T21:14:20Z)
- InterTracker: Discovering and Tracking General Objects Interacting with Hands in the Wild [40.489171608114574]
Existing methods rely on frame-based detectors to locate interacting objects.
We propose to leverage hand-object interaction to track interactive objects.
Our proposed method outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2023-08-06T09:09:17Z)
- Interacting Hand-Object Pose Estimation via Dense Mutual Attention [97.26400229871888]
3D hand-object pose estimation is the key to the success of many computer vision applications.
We propose a novel dense mutual attention mechanism that is able to model fine-grained dependencies between the hand and the object.
Our method is able to produce physically plausible poses with high quality and real-time inference speed.
arXiv Detail & Related papers (2022-11-16T10:01:33Z)
- Learning to Disambiguate Strongly Interacting Hands via Probabilistic Per-pixel Part Segmentation [84.28064034301445]
Self-similarity, and the resulting ambiguities in assigning pixel observations to the respective hands, is a major cause of the final 3D pose error.
We propose DIGIT, a novel method for estimating the 3D poses of two interacting hands from a single monocular image.
We experimentally show that the proposed approach achieves new state-of-the-art performance on the InterHand2.6M dataset.
arXiv Detail & Related papers (2021-07-01T13:28:02Z)
- Continuous Surface Embeddings [76.86259029442624]
We focus on the task of learning and representing dense correspondences in deformable object categories.
We propose a new, learnable image-based representation of dense correspondences.
We demonstrate that the proposed approach performs on par or better than the state-of-the-art methods for dense pose estimation for humans.
arXiv Detail & Related papers (2020-11-24T22:52:15Z)
- Measuring shape relations using r-parallel sets [0.5249805590164901]
We present a theory on the geometrical interaction between objects based on the theory of spatial point processes.
Our measures are simple like the volume and area of an object, but describe further details about the shape of individual objects and their pairwise geometrical relation.
arXiv Detail & Related papers (2020-08-10T07:30:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.