PARIS: Part-level Reconstruction and Motion Analysis for Articulated Objects
- URL: http://arxiv.org/abs/2308.07391v1
- Date: Mon, 14 Aug 2023 18:18:00 GMT
- Title: PARIS: Part-level Reconstruction and Motion Analysis for Articulated Objects
- Authors: Jiayi Liu, Ali Mahdavi-Amiri, Manolis Savva
- Abstract summary: We address the task of simultaneous part-level reconstruction and motion parameter estimation for articulated objects.
We present PARIS: a self-supervised, end-to-end architecture that learns part-level implicit shape and appearance models.
Our method generalizes better across object categories, and outperforms baselines and prior work that are given 3D point clouds as input.
- Score: 17.191728053966873
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the task of simultaneous part-level reconstruction and motion
parameter estimation for articulated objects. Given two sets of multi-view
images of an object in two static articulation states, we decouple the movable
part from the static part and reconstruct shape and appearance while predicting
the motion parameters. To tackle this problem, we present PARIS: a
self-supervised, end-to-end architecture that learns part-level implicit shape
and appearance models and optimizes motion parameters jointly without any 3D
supervision, motion, or semantic annotation. Our experiments show that our
method generalizes better across object categories, and outperforms baselines
and prior work that are given 3D point clouds as input. Our approach improves
reconstruction relative to state-of-the-art baselines with a Chamfer-L1
distance reduction of 3.94 (45.2%) for objects and 26.79 (84.5%) for parts, and
achieves a 5% error rate for motion estimation across 10 object categories.
Video summary at: https://youtu.be/tDSrROPCgUc
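The Chamfer-L1 numbers quoted above are average nearest-neighbour distances between point sets sampled from the predicted and ground-truth surfaces. As a point of reference, here is a minimal sketch of the metric in Python; the sampling density, scaling, and averaging conventions used by PARIS may differ, so treat this as illustrative rather than the paper's exact evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_l1(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """Symmetric Chamfer-L1 distance between two (N, 3) point sets.

    For every predicted point, find its nearest ground-truth point
    (and vice versa), then average the L1 norms of the offsets.
    Conventions (norm choice, scaling, one-sided vs. symmetric) vary
    between papers; this is one common definition, not necessarily
    the exact one used in the PARIS evaluation.
    """
    tree_gt = cKDTree(gt_pts)
    tree_pred = cKDTree(pred_pts)
    # Nearest-neighbour indices in each direction (Euclidean search).
    _, idx_p2g = tree_gt.query(pred_pts)
    _, idx_g2p = tree_pred.query(gt_pts)
    # L1 norm of the offset to each matched neighbour, averaged.
    d_p2g = np.abs(pred_pts - gt_pts[idx_p2g]).sum(axis=1).mean()
    d_g2p = np.abs(gt_pts - pred_pts[idx_g2p]).sum(axis=1).mean()
    return 0.5 * (d_p2g + d_g2p)
```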
Related papers
- LEIA: Latent View-invariant Embeddings for Implicit 3D Articulation [32.27869897947267]
We introduce LEIA, a novel approach for representing dynamic 3D objects.
Our method involves observing the object at distinct time steps or "states" and conditioning a hypernetwork on the current state.
By interpolating between these states, we can generate novel articulation configurations in 3D space that were previously unseen.
arXiv Detail & Related papers (2024-09-10T17:59:53Z)
- Uncertainty-aware Active Learning of NeRF-based Object Models for Robot Manipulators using Visual and Re-orientation Actions [8.059133373836913]
This paper presents an approach that enables a robot to rapidly learn the complete 3D model of a given object for manipulation in unfamiliar orientations.
We use an ensemble of partially constructed NeRF models to quantify model uncertainty to determine the next action.
Our approach determines when and how to grasp and re-orient an object given its partial NeRF model and re-estimates the object pose to rectify misalignments introduced during the interaction.
arXiv Detail & Related papers (2024-04-02T10:15:06Z)
- DO3D: Self-supervised Learning of Decomposed Object-aware 3D Motion and Depth from Monocular Videos [76.01906393673897]
We propose a self-supervised method to jointly learn 3D motion and depth from monocular videos.
Our system contains a depth estimation module to predict depth, and a new decomposed object-wise 3D motion (DO3D) estimation module to predict ego-motion and 3D object motion.
Our model delivers superior performance in all evaluated settings.
arXiv Detail & Related papers (2024-03-09T12:22:46Z)
- ROAM: Robust and Object-Aware Motion Generation Using Neural Pose Descriptors [73.26004792375556]
This paper shows that robustness and generalisation to novel scene objects in 3D object-aware character synthesis can be achieved by training a motion model with as few as one reference object.
We leverage an implicit feature representation trained on object-only datasets, which encodes an SE(3)-equivariant descriptor field around the object.
We demonstrate substantial improvements in 3D virtual character motion and interaction quality and robustness to scenarios with unseen objects.
arXiv Detail & Related papers (2023-08-24T17:59:51Z)
- DORT: Modeling Dynamic Objects in Recurrent for Multi-Camera 3D Object Detection and Tracking [67.34803048690428]
We propose to model Dynamic Objects in RecurrenT (DORT) to tackle this problem.
DORT extracts object-wise local volumes for motion estimation that also alleviates the heavy computational burden.
It is flexible and practical, and can be plugged into most camera-based 3D object detectors.
arXiv Detail & Related papers (2023-03-29T12:33:55Z)
- Segmenting Moving Objects via an Object-Centric Layered Representation [100.26138772664811]
We introduce an object-centric segmentation model with a depth-ordered layer representation.
We introduce a scalable pipeline for generating synthetic training data with multiple objects.
We evaluate the model on standard video segmentation benchmarks.
arXiv Detail & Related papers (2022-07-05T17:59:43Z)
- Unsupervised Kinematic Motion Detection for Part-segmented 3D Shape Collections [14.899075941080541]
We present an unsupervised approach for discovering articulated motions in a part-segmented 3D shape collection.
Our approach is based on a concept we call category closure: any valid articulation of an object's parts should keep the object in the same semantic category.
We evaluate our approach by using it to re-discover part motions from the PartNet-Mobility dataset.
arXiv Detail & Related papers (2022-06-17T00:50:36Z)
- Class-agnostic Reconstruction of Dynamic Objects from Videos [127.41336060616214]
We introduce REDO, a class-agnostic framework to REconstruct the Dynamic Objects from RGBD or calibrated videos.
We develop two novel modules. First, we introduce a canonical 4D implicit function which is pixel-aligned with aggregated temporal visual cues.
Second, we develop a 4D transformation module which captures object dynamics to support temporal propagation and aggregation.
arXiv Detail & Related papers (2021-12-03T18:57:47Z)
- Learning to Segment Rigid Motions from Two Frames [72.14906744113125]
We propose a modular network, motivated by a geometric analysis of what independent object motions can be recovered from an egomotion field.
It takes two consecutive frames as input and predicts segmentation masks for the background and multiple rigidly moving objects, which are then parameterized by 3D rigid transformations.
Our method achieves state-of-the-art performance for rigid motion segmentation on KITTI and Sintel.
arXiv Detail & Related papers (2021-01-11T04:20:30Z)
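Several of the papers above, PARIS included, represent part articulation with explicit motion parameters: for a revolute joint, typically an axis direction, a pivot point on that axis, and a rotation angle. As a rough illustration of what such parameters encode (the function and parameter names below are hypothetical, not taken from any of these papers' code), the sketch applies a revolute motion to a movable part's points via Rodrigues' rotation formula.

```python
import numpy as np

def apply_revolute_motion(points, axis, pivot, angle_rad):
    """Rotate (N, 3) points of a movable part about a joint axis.

    `axis` is the joint direction, `pivot` a point on the axis, and
    `angle_rad` the articulation angle. Rodrigues' rotation formula
    builds the rotation matrix; all names here are illustrative and
    do not correspond to any published API.
    """
    a = np.asarray(axis, dtype=float)
    a /= np.linalg.norm(a)
    # Skew-symmetric cross-product matrix of the unit axis.
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    # Rodrigues: R = I + sin(t) K + (1 - cos(t)) K^2
    R = np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)
    # Rotate about the pivot rather than the origin.
    return (np.asarray(points) - pivot) @ R.T + pivot
```

A prismatic joint is the analogous case with a translation, `points + distance * axis`, in place of the rotation.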