Self-Supervised Learning of Part Mobility from Point Cloud Sequence
- URL: http://arxiv.org/abs/2010.11735v2
- Date: Tue, 2 Mar 2021 09:34:11 GMT
- Title: Self-Supervised Learning of Part Mobility from Point Cloud Sequence
- Authors: Yahao Shi, Xinyu Cao and Bin Zhou
- Abstract summary: We introduce a self-supervised method for segmenting parts and predicting their motion attributes from a point cloud sequence representing a dynamic object.
We generate trajectories by using correlations among successive frames of the sequence.
We evaluate our method on various tasks including motion part segmentation, motion axis prediction and motion range estimation.
- Score: 9.495859862104515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Part mobility analysis is a significant aspect required to achieve a
functional understanding of 3D objects. It would be natural to obtain part
mobility from the continuous part motion of 3D objects. In this study, we
introduce a self-supervised method for segmenting motion parts and predicting
their motion attributes from a point cloud sequence representing a dynamic
object. To sufficiently utilize spatiotemporal information from the point cloud
sequence, we generate trajectories by using correlations among successive
frames of the sequence instead of directly processing the point clouds. We
propose a novel neural network architecture called PointRNN to learn feature
representations of trajectories along with their part rigid motions. We
evaluate our method on various tasks including motion part segmentation, motion
axis prediction and motion range estimation. The results demonstrate that our
method outperforms previous techniques on both synthetic and real datasets.
Moreover, our method has the ability to generalize to new and unseen objects.
Notably, our method requires no prior knowledge of shape structure, shape
category, or shape orientation. To the best of our knowledge, this is the first
deep learning study to extract part mobility from the point cloud sequence of a
dynamic object.
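As a rough illustration of the trajectory-generation idea described above, the sketch below chains per-frame nearest-neighbor correspondences into point trajectories. This is a simplified stand-in, not the paper's actual correlation scheme, which the abstract does not detail:

```python
import numpy as np

def nearest_neighbor_trajectories(frames):
    """Chain per-frame nearest-neighbor correspondences into point trajectories.

    frames: list of (N, 3) arrays, one per time step (same N for simplicity).
    Returns an (N, T, 3) array: one 3D trajectory per point of the first frame.
    """
    T = len(frames)
    N = frames[0].shape[0]
    traj = np.empty((N, T, 3))
    traj[:, 0] = frames[0]
    idx = np.arange(N)  # current correspondence index of each trajectory
    for t in range(1, T):
        prev, curr = frames[t - 1][idx], frames[t]
        # brute-force nearest neighbor from the tracked points to the next frame
        d2 = ((prev[:, None, :] - curr[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        traj[:, t] = curr[idx]
    return traj
```

Trajectories produced this way carry the spatiotemporal information that a network like PointRNN could consume; a real pipeline would use a more robust correspondence criterion than raw nearest neighbors.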
Related papers
- Articulated Object Manipulation using Online Axis Estimation with SAM2-Based Tracking [59.87033229815062]
Articulated object manipulation requires precise object interaction, where the object's axis must be carefully considered.
Previous research employed interactive perception for manipulating articulated objects, but such open-loop approaches often overlook the interaction dynamics.
We present a closed-loop pipeline integrating interactive perception with online axis estimation from segmented 3D point clouds.
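The axis-estimation step can be illustrated with a minimal sketch: assuming the moving part has already been segmented and tracked (the SAM2-based pipeline itself is not reproduced here), a revolute axis direction can be recovered from two observed poses of the part via the Kabsch algorithm:

```python
import numpy as np

def revolute_axis_from_motion(p_before, p_after):
    """Estimate a revolute joint's axis direction from two observations of a
    rigidly moving part (corresponding (N, 3) point sets).

    Recovers the part's rotation with the Kabsch algorithm, then returns the
    rotation axis as the eigenvector of R associated with eigenvalue 1.
    """
    pc = p_before - p_before.mean(axis=0)
    qc = p_after - p_after.mean(axis=0)
    u, _, vt = np.linalg.svd(pc.T @ qc)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T  # rotation mapping p_before to p_after
    w, v = np.linalg.eig(r)
    axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return axis / np.linalg.norm(axis)
```

The returned direction is sign-ambiguous; a closed-loop system would also track the axis position and refine both online as new observations arrive.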
arXiv Detail & Related papers (2024-09-24T17:59:56Z)
- AGAR: Attention Graph-RNN for Adaptative Motion Prediction of Point Clouds of Deformable Objects [7.414594429329531]
We propose an improved architecture for point cloud prediction of deformable 3D objects.
Specifically, to handle deformable shapes, we propose a graph-based approach that learns and exploits the spatial structure of point clouds.
The proposed adaptive module controls the composition of local and global motions for each point, enabling the network to model complex motions in deformable 3D objects more effectively.
arXiv Detail & Related papers (2023-07-19T12:21:39Z) - MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as DanceTrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z) - Semi-Weakly Supervised Object Kinematic Motion Prediction [56.282759127180306]
Given a 3D object, kinematic motion prediction aims to identify the mobile parts as well as the corresponding motion parameters.
We propose a graph neural network to learn the map between hierarchical part-level segmentation and mobile parts parameters.
The network predictions yield a large-scale set of 3D objects with pseudo-labeled mobility information.
arXiv Detail & Related papers (2023-03-31T02:37:36Z)
- Unsupervised Kinematic Motion Detection for Part-segmented 3D Shape Collections [14.899075941080541]
We present an unsupervised approach for discovering articulated motions in a part-segmented 3D shape collection.
Our approach is based on a concept we call category closure: any valid articulation of an object's parts should keep the object in the same semantic category.
We evaluate our approach by using it to re-discover part motions from the PartNet-Mobility dataset.
arXiv Detail & Related papers (2022-06-17T00:50:36Z)
- Exploring Optical-Flow-Guided Motion and Detection-Based Appearance for Temporal Sentence Grounding [61.57847727651068]
Temporal sentence grounding aims to semantically localize a target segment in an untrimmed video according to a given sentence query.
Most previous works focus on learning frame-level features of each whole frame in the entire video, and directly match them with the textual information.
We propose a novel Motion- and Appearance-guided 3D Semantic Reasoning Network (MA3SRN), which incorporates optical-flow-guided motion-aware, detection-based appearance-aware, and 3D-aware object-level features.
arXiv Detail & Related papers (2022-03-06T13:57:09Z)
- "What's This?" -- Learning to Segment Unknown Objects from Manipulation Sequences [27.915309216800125]
We present a novel framework for self-supervised grasped object segmentation with a robotic manipulator.
We propose a single, end-to-end trainable architecture which jointly incorporates motion cues and semantic knowledge.
Our method depends neither on visual registration of a kinematic robot model or 3D object models, nor on precise hand-eye calibration or additional sensor data.
arXiv Detail & Related papers (2020-11-06T10:55:28Z)
- DyStaB: Unsupervised Object Segmentation via Dynamic-Static Bootstrapping [72.84991726271024]
We describe an unsupervised method to detect and segment portions of images of live scenes that are seen moving as a coherent whole.
Our method first partitions the motion field by minimizing the mutual information between segments.
It uses the segments to learn object models that can be used for detection in a static image.
arXiv Detail & Related papers (2020-08-16T22:05:13Z)
- AutoTrajectory: Label-free Trajectory Extraction and Prediction from Videos using Dynamic Points [92.91569287889203]
We present a novel, label-free algorithm, AutoTrajectory, for trajectory extraction and prediction.
To better capture the moving objects in videos, we introduce dynamic points.
We aggregate dynamic points to instance points, which stand for moving objects such as pedestrians in videos.
arXiv Detail & Related papers (2020-04-24T10:40:07Z)
- Any Motion Detector: Learning Class-agnostic Scene Dynamics from a Sequence of LiDAR Point Clouds [4.640835690336654]
We propose a novel real-time approach to temporal context aggregation for motion detection and motion parameter estimation.
We introduce an ego-motion compensation layer to achieve real-time inference with performance comparable to a naive odometric transform of the original point cloud sequence.
arXiv Detail & Related papers (2020-04-24T10:40:07Z)
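The ego-motion compensation underlying this last approach reduces to mapping each LiDAR frame into a common world frame with the sensor's pose. A minimal sketch of that rigid transform (a plain homogeneous-coordinate mapping, not the paper's learned compensation layer):

```python
import numpy as np

def compensate_ego_motion(cloud, ego_pose):
    """Map a LiDAR cloud from the sensor frame at time t into a fixed world frame.

    cloud:    (N, 3) points in the sensor frame.
    ego_pose: (4, 4) homogeneous sensor-to-world transform at time t.
    """
    # append a homogeneous coordinate, apply the pose, drop the coordinate again
    homog = np.hstack([cloud, np.ones((cloud.shape[0], 1))])
    return (ego_pose @ homog.T).T[:, :3]
```

Applying this per frame removes the sensor's own motion, so that any residual displacement between successive compensated clouds is attributable to genuinely moving objects.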
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.