Tracking Partially-Occluded Deformable Objects while Enforcing Geometric
Constraints
- URL: http://arxiv.org/abs/2011.00627v1
- Date: Sun, 1 Nov 2020 21:13:18 GMT
- Title: Tracking Partially-Occluded Deformable Objects while Enforcing Geometric
Constraints
- Authors: Yixuan Wang, Dale McConachie, Dmitry Berenson
- Abstract summary: Building a high-fidelity physics simulation to aid in tracking a deformable object is difficult for novel environments. We instead track the object using RGBD images and geometric motion estimates. Our contributions allow us to outperform previous methods by a large margin in terms of accuracy.
- Score: 11.247580943940916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In order to manipulate a deformable object, such as rope or cloth, in
unstructured environments, robots need a way to estimate its current shape.
However, tracking the shape of a deformable object can be challenging because
of the object's high flexibility, (self-)occlusion, and interaction with
obstacles. Building a high-fidelity physics simulation to aid in tracking is
difficult for novel environments. Instead we focus on tracking the object based
on RGBD images and geometric motion estimates and obstacles. Our key
contributions over previous work in this vein are: 1) A better way to handle
severe occlusion by using a motion model to regularize the tracking estimate;
and 2) The formulation of \textit{convex} geometric constraints, which allow us
to prevent self-intersection and penetration into known obstacles via a
post-processing step. These contributions allow us to outperform previous
methods by a large margin in terms of accuracy in scenarios with severe
occlusion and obstacles.
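The abstract's second contribution, convex geometric constraints enforced via post-processing, can be illustrated with a minimal sketch. The paper formulates this as a constrained optimization over the full tracked configuration; the snippet below shows only the simplest convex case, projecting tracked nodes out of a known planar obstacle. The function name and half-space parameterization are illustrative assumptions, not the paper's actual method or API.

```python
import numpy as np

def project_outside_halfspace(points, normal, offset):
    """Project each tracked node onto the convex feasible half-space
    {x : normal . x >= offset}, keeping the shape estimate out of a
    known planar obstacle (a hypothetical single-constraint case)."""
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)
    points = np.asarray(points, dtype=float).copy()
    d = points @ normal - offset        # signed distance to the plane
    viol = d < 0                        # nodes penetrating the obstacle
    points[viol] -= np.outer(d[viol], normal)  # closed-form projection
    return points

# Example: one node one unit inside the obstacle, one safely outside.
tracked = np.array([[0.0, 0.0, -1.0],
                    [0.0, 0.0,  2.0]])
corrected = project_outside_halfspace(tracked, [0.0, 0.0, 1.0], 0.0)
```

Because a half-space is convex, each projection is a closed-form minimum-distance correction; the paper's general formulation combines such constraints (including anti-self-intersection) into a single convex program.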
Related papers
- IoUCert: Robustness Verification for Anchor-based Object Detectors [58.35703549470485]
We introduce IoUCert, a novel formal verification framework designed specifically to overcome these bottlenecks in anchor-based object detection architectures. We show that our method enables the robustness verification of realistic, anchor-based models including SSD, YOLOv2, and YOLOv3 variants against various input perturbations.
arXiv Detail & Related papers (2026-03-03T14:36:46Z) - GeoMotion: Rethinking Motion Segmentation via Latent 4D Geometry [61.24189040578178]
We propose a fully learning-based approach that directly infers moving objects from latent feature representations via attention mechanisms. Our key insight is to bypass explicit correspondence estimation and instead let the model learn to implicitly disentangle object and camera motion. Our approach achieves state-of-the-art motion segmentation performance with high efficiency.
arXiv Detail & Related papers (2026-02-25T11:36:33Z) - Simulation-Ready Cluttered Scene Estimation via Physics-aware Joint Shape and Pose Optimization [27.083888910311984]
Estimating simulation-ready scenes from real-world observations is crucial for downstream planning and policy learning tasks. Existing methods struggle in cluttered environments. We propose a unified optimization-based formulation for real-to-sim scene estimation.
arXiv Detail & Related papers (2026-02-23T18:58:24Z) - Latent Diffeomorphic Co-Design of End-Effectors for Deformable and Fragile Object Manipulation [11.839375212218412]
We present the first co-design framework that jointly optimizes end-effector morphology and manipulation control for deformable and fragile object manipulation. We evaluate our approach on challenging food manipulation tasks, including grasping and pushing jelly and scooping fillets.
arXiv Detail & Related papers (2026-02-20T00:33:20Z) - Kinematify: Open-Vocabulary Synthesis of High-DoF Articulated Objects [59.51185639557874]
We introduce Kinematify, an automated framework that synthesizes articulated objects directly from arbitrary RGB images or textual descriptions. Our method addresses two core challenges: (i) inferring kinematic topologies for high-DoF objects and (ii) estimating joint parameters from static geometry.
arXiv Detail & Related papers (2025-11-03T07:21:42Z) - Delving into Dynamic Scene Cue-Consistency for Robust 3D Multi-Object Tracking [16.366398265001422]
3D multi-object tracking is a critical and challenging task in the field of autonomous driving. We introduce the Dynamic Scene Cue-Consistency Tracker (DSC-Track) to implement this principle.
arXiv Detail & Related papers (2025-08-15T08:48:13Z) - BoxDreamer: Dreaming Box Corners for Generalizable Object Pose Estimation [58.14071520415005]
This paper presents a general RGB-based approach for object pose estimation, specifically designed to address challenges in sparse-view settings.
To overcome these limitations, we introduce corner points of the object bounding box as an intermediate representation of the object pose.
The 3D object corners can be reliably recovered from sparse input views, while the 2D corner points in the target view are estimated through a novel reference-based approach.
arXiv Detail & Related papers (2025-04-10T17:58:35Z) - Street Gaussians without 3D Object Tracker [86.62329193275916]
Existing methods rely on labor-intensive manual labeling of object poses to reconstruct dynamic objects in canonical space.
We propose a stable object tracking module by leveraging associations from 2D deep trackers within a 3D object fusion strategy.
We address inevitable tracking errors by further introducing a motion learning strategy in an implicit feature space that autonomously corrects trajectory errors and recovers missed detections.
arXiv Detail & Related papers (2024-12-07T05:49:42Z) - Articulated Object Manipulation using Online Axis Estimation with SAM2-Based Tracking [59.87033229815062]
Articulated object manipulation requires precise object interaction, where the object's axis must be carefully considered.
Previous research employed interactive perception for manipulating articulated objects, but such open-loop approaches often overlook the interaction dynamics.
We present a closed-loop pipeline integrating interactive perception with online axis estimation from segmented 3D point clouds.
arXiv Detail & Related papers (2024-09-24T17:59:56Z) - Dynamic Position Transformation and Boundary Refinement Network for Left Atrial Segmentation [17.09918110723713]
Left atrial (LA) segmentation is a crucial technique for irregular heartbeat (i.e., atrial fibrillation) diagnosis.
Most current methods for LA segmentation strictly assume that the input data is acquired using object-oriented center cropping.
We propose a novel Dynamic Position transformation and Boundary refinement Network (DPBNet) to tackle these issues.
arXiv Detail & Related papers (2024-07-07T22:09:35Z) - Learning Extrinsic Dexterity with Parameterized Manipulation Primitives [8.7221770019454]
We learn a sequence of actions that utilize the environment to change the object's pose.
Our approach can control the object's state through exploiting interactions between the object, the gripper, and the environment.
We evaluate our approach on picking box-shaped objects with various weights, shapes, and friction properties from a constrained table-top workspace.
arXiv Detail & Related papers (2023-10-26T21:28:23Z) - DeepSimHO: Stable Pose Estimation for Hand-Object Interaction via
Physics Simulation [81.11585774044848]
We present DeepSimHO, a novel deep-learning pipeline that combines forward physics simulation and backward gradient approximation with a neural network.
Our method noticeably improves the stability of the estimation and achieves superior efficiency over test-time optimization.
arXiv Detail & Related papers (2023-10-11T05:34:36Z) - Decaf: Monocular Deformation Capture for Face and Hand Interactions [77.75726740605748]
This paper introduces the first method that allows tracking human hands interacting with human faces in 3D from single monocular RGB videos.
We model hands as articulated objects inducing non-rigid face deformations during an active interaction.
Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system.
arXiv Detail & Related papers (2023-09-28T17:59:51Z) - Planning with Spatial-Temporal Abstraction from Point Clouds for
Deformable Object Manipulation [64.00292856805865]
We propose PlAnning with Spatial-Temporal Abstraction (PASTA), which incorporates both spatial abstraction and temporal abstraction.
Our framework maps high-dimension 3D observations into a set of latent vectors and plans over skill sequences on top of the latent set representation.
We show that our method can effectively perform challenging deformable object manipulation tasks in the real world.
arXiv Detail & Related papers (2022-10-27T19:57:04Z) - Deformable One-Dimensional Object Detection for Routing and Manipulation [8.860083597706502]
This paper proposes an approach for detecting deformable one-dimensional objects which can handle crossings and occlusions.
Our algorithm takes an image containing a deformable object and outputs a chain of fixed-length cylindrical segments connected with passive spherical joints.
Our tests and experiments have shown that the method can correctly detect deformable one-dimensional objects in various complex conditions.
arXiv Detail & Related papers (2022-01-18T07:19:17Z) - A Bayesian Treatment of Real-to-Sim for Deformable Object Manipulation [59.29922697476789]
We propose a novel methodology for extracting state information from image sequences via a technique to represent the state of a deformable object as a distribution embedding.
Our experiments confirm that we can estimate posterior distributions of physical properties, such as elasticity, friction and scale of highly deformable objects, such as cloth and ropes.
arXiv Detail & Related papers (2021-12-09T17:50:54Z) - Localization and Tracking of User-Defined Points on Deformable Objects
for Robotic Manipulation [0.0]
This paper introduces an efficient procedure to localize user-defined points on the surface of deformable objects.
We propose a discretized deformation field, which is estimated during runtime using a multi-step non-linear solver pipeline.
Our approach is capable of solving the localization problem online in a data-parallel manner, making it ideally suited for the perception of non-rigid objects in industrial manufacturing processes.
arXiv Detail & Related papers (2021-05-19T11:25:33Z) - Scalable Differentiable Physics for Learning and Control [99.4302215142673]
Differentiable physics is a powerful approach to learning and control problems that involve physical objects and environments.
We develop a scalable framework for differentiable physics that can support a large number of objects and their interactions.
arXiv Detail & Related papers (2020-07-04T19:07:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.