Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions
- URL: http://arxiv.org/abs/2206.14854v1
- Date: Wed, 29 Jun 2022 18:47:05 GMT
- Title: Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions
- Authors: Yun-Chun Chen, Adithyavairavan Murali, Balakumar Sundaralingam, Wei Yang, Animesh Garg, Dieter Fox
- Abstract summary: We present Neural Motion Fields, a novel object representation which encodes both object point clouds and the relative task trajectories as an implicit value function parameterized by a neural network.
This object-centric representation models a continuous distribution over the SE(3) space and allows us to perform grasping reactively by leveraging sampling-based MPC to optimize this value function.
- Score: 65.84090965167535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The pipeline of current robotic pick-and-place methods typically consists of
several stages: grasp pose detection, finding inverse kinematic solutions for
the detected poses, planning a collision-free trajectory, and then executing
the open-loop trajectory to the grasp pose with a low-level tracking
controller. While these grasping methods have shown good performance on
grasping static objects on a table-top, grasping dynamic objects in
constrained environments remains an open problem. We present Neural Motion
Fields, a novel object representation which encodes both object point clouds
and the relative task trajectories as an implicit value function parameterized
by a neural network. This object-centric representation models a continuous
distribution over the SE(3) space and allows us to perform grasping reactively
by leveraging sampling-based MPC to optimize this value function.
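To make the idea concrete, here is a minimal sketch (not the authors' released code) of a value function over gripper poses conditioned on an object point cloud, together with one sampling-based MPC step that scores pose candidates. The class and function names, the network widths, and the 7-D position-plus-quaternion pose encoding are illustrative assumptions.

```python
# Minimal sketch of a neural value function over SE(3) gripper poses,
# conditioned on an object point cloud, optimized at run time by
# sampling-based MPC. All names here are hypothetical.
import torch
import torch.nn as nn

class MotionValueField(nn.Module):
    def __init__(self, pc_feat_dim=128, pose_dim=7):  # pose = xyz + quaternion
        super().__init__()
        # PointNet-style point cloud encoder (max-pooled per-point MLP)
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, pc_feat_dim))
        # Value head: object feature + gripper pose -> scalar cost-to-go
        self.value_head = nn.Sequential(nn.Linear(pc_feat_dim + pose_dim, 256),
                                        nn.ReLU(),
                                        nn.Linear(256, 256), nn.ReLU(),
                                        nn.Linear(256, 1))

    def forward(self, points, pose):
        # points: (B, N, 3) object point cloud; pose: (B, 7) gripper pose
        feat = self.point_mlp(points).max(dim=1).values      # (B, pc_feat_dim)
        return self.value_head(torch.cat([feat, pose], dim=-1)).squeeze(-1)

def mpc_step(model, points, current_pose, num_samples=256, noise=0.05):
    """One sampling-based MPC step: perturb the current pose, score the
    candidates with the learned value function, return the best one."""
    candidates = current_pose.unsqueeze(0) + noise * torch.randn(num_samples, 7)
    # Crude quaternion handling for the sketch: renormalize after perturbing
    candidates[:, 3:] = nn.functional.normalize(candidates[:, 3:], dim=-1)
    with torch.no_grad():
        values = model(points.expand(num_samples, -1, -1), candidates)
    return candidates[values.argmin()]  # lowest predicted cost-to-go

# Usage: score random candidates around the current gripper pose.
model = MotionValueField()
cloud = torch.randn(1, 1024, 3)                       # placeholder point cloud
pose = torch.tensor([0.0, 0.0, 0.5, 1.0, 0.0, 0.0, 0.0])
next_pose = mpc_step(model, cloud, pose)
```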
Related papers
- Estimating Dynamic Flow Features in Groups of Tracked Objects [2.4344640336100936]
This work aims to extend gradient-based dynamical systems analyses to real-world applications characterized by complex, feature-rich image sequences with imperfect tracers.
The proposed approach is affordably implemented and enables advanced studies including the motion analysis of two distinct object classes in a single image sequence.
arXiv Detail & Related papers (2024-08-29T01:06:51Z)
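As a rough illustration of applying gradient-based flow analysis to tracked objects, the sketch below recovers a gridded velocity field from sparse tracks via finite differences and interpolation. The interpolation scheme and the vorticity diagnostic are assumptions for illustration, not this paper's method.

```python
# Recover a gridded velocity field from sparse object tracks, so
# gradient-based flow diagnostics can be applied.
import numpy as np
from scipy.interpolate import griddata

def velocity_field_from_tracks(positions_t0, positions_t1, dt, grid_x, grid_y):
    """positions_t*: (N, 2) tracked object positions at consecutive frames."""
    velocities = (positions_t1 - positions_t0) / dt          # finite difference
    gx, gy = np.meshgrid(grid_x, grid_y)
    # Interpolate the sparse tracer velocities onto a regular grid
    u = griddata(positions_t0, velocities[:, 0], (gx, gy), method='linear')
    v = griddata(positions_t0, velocities[:, 1], (gx, gy), method='linear')
    return u, v

# Example: 200 tracked objects advected by a simple rotational flow
rng = np.random.default_rng(0)
grid_x = np.linspace(-0.8, 0.8, 32)
grid_y = np.linspace(-0.8, 0.8, 32)
p0 = rng.uniform(-1, 1, size=(200, 2))
p1 = p0 + 0.01 * np.stack([-p0[:, 1], p0[:, 0]], axis=1)     # rotation step
u, v = velocity_field_from_tracks(p0, p1, 0.01, grid_x, grid_y)
# Spatial gradients of the field enable diagnostics such as vorticity
dudy, dudx = np.gradient(u, grid_y, grid_x)
dvdy, dvdx = np.gradient(v, grid_y, grid_x)
vorticity = dvdx - dudy
```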
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) semantic embedding, based on which the concept of gears allows for stratified modeling of the dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z)
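A hedged sketch of the motion-aware sampling idea: rays through more dynamic regions receive a larger sample budget, with regions stratified into discrete "gears" by a motion score. The gear thresholds and budgets below are invented for illustration and are not Gear-NeRF's actual sampler.

```python
# Allocate more samples along rays that intersect highly dynamic regions,
# mimicking stratification of the scene into "gears" by motion magnitude.
import numpy as np

def motion_aware_sample_counts(motion_scores, min_samples=32, max_samples=192):
    """motion_scores: (num_rays,) in [0, 1], e.g. predicted motion magnitude.
    Returns an integer sample budget per ray, higher for dynamic regions."""
    gears = np.digitize(motion_scores, bins=[0.25, 0.5, 0.75])   # 4 gears: 0..3
    budget = np.linspace(min_samples, max_samples, num=4).astype(int)
    return budget[gears]

scores = np.array([0.05, 0.4, 0.9])        # static, mildly dynamic, dynamic
print(motion_aware_sample_counts(scores))  # [ 32  85 192]
```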
- Degrees of Freedom Matter: Inferring Dynamics from Point Trajectories [28.701879490459675]
We aim to learn an implicit motion field parameterized by a neural network to predict the movement of novel points within the same domain.
We exploit the intrinsic regularization provided by SIREN and modify the input layer to produce a temporally smooth motion field.
Our experiments assess the model's performance in predicting unseen point trajectories and its application to temporal mesh alignment with deformation.
arXiv Detail & Related papers (2024-06-05T21:02:10Z)
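Below is a minimal SIREN-style implicit motion field, assuming the common sine-activation formulation with (x, y, z, t) inputs and a 3-D displacement output; omega_0, the initialization, and the layer widths follow the standard SIREN recipe rather than this paper's exact configuration.

```python
# SIREN-style implicit motion field: sine activations give smooth
# derivatives, and (x, y, z, t) inputs yield a temporally smooth
# displacement field.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_dim, out_dim, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_dim, out_dim)
        # SIREN initialization: first layer uniform in [-1/n, 1/n],
        # later layers scaled down by omega_0
        with torch.no_grad():
            bound = 1.0 / in_dim if is_first else (6.0 / in_dim) ** 0.5 / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class MotionField(nn.Module):
    """Maps (x, y, z, t) to a 3D displacement of that point at time t."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(SineLayer(4, hidden, is_first=True),
                                 SineLayer(hidden, hidden),
                                 SineLayer(hidden, hidden),
                                 nn.Linear(hidden, 3))

    def forward(self, xyzt):
        return self.net(xyzt)

field = MotionField()
points = torch.rand(1024, 4)        # (x, y, z, t) queries in [0, 1]
displacement = field(points)        # (1024, 3) predicted motion
```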
- EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via Self-Supervision [85.17951804790515]
EmerNeRF is a simple yet powerful approach for learning spatial-temporal representations of dynamic driving scenes.
It simultaneously captures scene geometry, appearance, motion, and semantics via self-bootstrapping.
Our method achieves state-of-the-art performance in sensor simulation.
arXiv Detail & Related papers (2023-11-03T17:59:55Z)
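One way to picture a static/dynamic scene decomposition of this kind is two radiance fields composited by their densities, so the dynamic branch dominates only where motion exists. The tiny placeholder fields below are illustrative, not EmerNeRF's architecture.

```python
# Two radiance fields queried at each point and composited by density, so the
# dynamic branch only "wins" where the scene actually moves.
import torch
import torch.nn as nn

class TinyField(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 4))  # (density, rgb)

    def forward(self, x):
        out = self.net(x)
        return nn.functional.softplus(out[..., :1]), torch.sigmoid(out[..., 1:])

static_field = TinyField(in_dim=3)    # queried with (x, y, z)
dynamic_field = TinyField(in_dim=4)   # queried with (x, y, z, t)

def composite(xyz, t):
    s_sigma, s_rgb = static_field(xyz)
    d_sigma, d_rgb = dynamic_field(torch.cat([xyz, t], dim=-1))
    sigma = s_sigma + d_sigma                       # densities add
    w = d_sigma / sigma.clamp(min=1e-6)             # dynamic blending weight
    rgb = (1 - w) * s_rgb + w * d_rgb               # density-weighted color
    return sigma, rgb

sigma, rgb = composite(torch.rand(4096, 3), torch.rand(4096, 1))
```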
- Forward Flow for Novel View Synthesis of Dynamic Scenes [97.97012116793964]
We propose a neural radiance field (NeRF) approach for novel view synthesis of dynamic scenes using forward warping.
Our method outperforms existing methods in both novel view rendering and motion modeling.
arXiv Detail & Related papers (2023-09-29T16:51:06Z)
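A short sketch of forward warping: canonical-space points are displaced forward by a predicted flow to their positions at time t, rather than warping observations backward into a canonical frame. The flow MLP here is a stand-in for the paper's learned field.

```python
# Canonical-space points are moved *forward* by a predicted flow to their
# location at time t.
import torch
import torch.nn as nn

forward_flow = nn.Sequential(nn.Linear(4, 128), nn.ReLU(),
                             nn.Linear(128, 128), nn.ReLU(),
                             nn.Linear(128, 3))   # (x, y, z, t) -> offset

def warp_canonical_to_time(canonical_points, t):
    """canonical_points: (N, 3); t: scalar time in [0, 1]."""
    t_col = torch.full_like(canonical_points[:, :1], t)
    offsets = forward_flow(torch.cat([canonical_points, t_col], dim=-1))
    return canonical_points + offsets   # positions in the frame at time t

pts_t = warp_canonical_to_time(torch.rand(2048, 3), t=0.3)
```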
- Dyna-DepthFormer: Multi-frame Transformer for Self-Supervised Depth Estimation in Dynamic Scenes [19.810725397641406]
We propose a novel Dyna-DepthFormer framework, which jointly predicts scene depth and a 3D motion field.
Our contributions are two-fold. First, we leverage multi-view correlation through a series of self- and cross-attention layers in order to obtain enhanced depth feature representations.
Second, we propose a warping-based Motion Network to estimate the motion field of dynamic objects without using semantic prior.
arXiv Detail & Related papers (2023-01-14T09:43:23Z)
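The warping step common to self-supervised depth methods of this kind can be sketched as follows: back-project target pixels with predicted depth, transform them with the relative pose, and bilinearly sample the source image. This is the generic formulation, not Dyna-DepthFormer's specific network.

```python
# Standard view-synthesis warp used as the self-supervision signal in
# monocular/multi-frame depth estimation.
import torch
import torch.nn.functional as F

def warp_source_to_target(src_img, depth, K, T_src_from_tgt):
    """src_img: (1, 3, H, W); depth: (1, 1, H, W); K: (3, 3) intrinsics;
    T_src_from_tgt: (4, 4) relative pose. Returns the warped source image."""
    _, _, H, W = src_img.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing='ij')
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)
    cam = torch.linalg.inv(K) @ pix * depth.reshape(1, -1)   # back-project
    cam_h = torch.cat([cam, torch.ones(1, H * W)], dim=0)    # homogeneous
    src_cam = (T_src_from_tgt @ cam_h)[:3]                   # into source frame
    src_pix = K @ src_cam
    src_pix = src_pix[:2] / src_pix[2:].clamp(min=1e-6)      # perspective divide
    # Normalize pixel coordinates to [-1, 1] for grid_sample
    grid = torch.stack([src_pix[0] / (W - 1), src_pix[1] / (H - 1)], dim=-1)
    grid = (grid * 2 - 1).reshape(1, H, W, 2)
    return F.grid_sample(src_img, grid, align_corners=True)

H, W = 96, 320
K = torch.tensor([[200.0, 0, W / 2], [0, 200.0, H / 2], [0, 0, 1.0]])
warped = warp_source_to_target(torch.rand(1, 3, H, W),
                               torch.rand(1, 1, H, W) * 10 + 1,
                               K, torch.eye(4))
```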
- ImpDet: Exploring Implicit Fields for 3D Object Detection [74.63774221984725]
We introduce a new perspective that views bounding box regression as an implicit function.
This leads to our proposed framework, termed Implicit Detection, or ImpDet.
Our ImpDet assigns specific values to points in different local 3D spaces so that high-quality boundaries can be generated.
arXiv Detail & Related papers (2022-03-31T17:52:12Z)
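To illustrate treating a box as an implicit function, the sketch below assigns each query point a signed value whose zero level set is the surface of an axis-aligned box. ImpDet's learned field and value assignment differ; this only shows the representation.

```python
# A 3D bounding box as an implicit function: positive inside, zero on the
# surface, negative outside.
import numpy as np

def box_implicit_value(points, center, size):
    """points: (N, 3); center: (3,); size: (3,) full box extents.
    Returns (N,) values: > 0 inside the box, = 0 on it, < 0 outside."""
    offset = np.abs(points - center) - size / 2.0
    # Largest per-axis excursion decides the sign (inside iff all negative)
    return -offset.max(axis=1)

box_center = np.array([0.0, 0.0, 1.0])
box_size = np.array([4.0, 2.0, 1.5])
queries = np.array([[0.0, 0.0, 1.0],    # center -> positive
                    [2.0, 0.0, 1.0],    # on a face -> zero
                    [5.0, 0.0, 1.0]])   # outside -> negative
print(box_implicit_value(queries, box_center, box_size))  # [ 0.75  0.   -3.  ]
```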
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our method is validated on complex quadruped robot dynamics, and the approach can be applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
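A hedged sketch of the risk-averse costing idea: candidate trajectories are penalized by Mahalanobis distance to a predicted obstacle position, so a larger inferred covariance (in the paper, the RNN's output) widens the effective keep-out region. The hinge cost and thresholds below are illustrative assumptions.

```python
# Penalize candidate MPC trajectories by an uncertainty-aware (Mahalanobis)
# distance to a predicted obstacle, instead of plain Euclidean distance.
import numpy as np

def risk_cost(trajectory, obstacle_mean, obstacle_cov, safe_maha=3.0):
    """trajectory: (T, 2) candidate robot positions; obstacle_*: predicted
    obstacle position mean (2,) and covariance (2, 2)."""
    cov_inv = np.linalg.inv(obstacle_cov)
    diff = trajectory - obstacle_mean
    maha = np.sqrt(np.einsum('ti,ij,tj->t', diff, cov_inv, diff))
    # Hinge penalty on every step that comes within `safe_maha` sigmas
    return np.maximum(safe_maha - maha, 0.0).sum()

# Pick the lowest-risk candidate among sampled rollouts
rng = np.random.default_rng(1)
candidates = rng.normal(size=(64, 10, 2)).cumsum(axis=1)     # random rollouts
obstacle_mu = np.array([2.0, 2.0])
obstacle_cov = np.array([[0.5, 0.1], [0.1, 0.3]])            # e.g. RNN output
costs = [risk_cost(traj, obstacle_mu, obstacle_cov) for traj in candidates]
best = candidates[int(np.argmin(costs))]
```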
- Any Motion Detector: Learning Class-agnostic Scene Dynamics from a Sequence of LiDAR Point Clouds [4.640835690336654]
We propose a novel real-time approach to temporal context aggregation for motion detection and motion parameter estimation.
We introduce an ego-motion compensation layer to achieve real-time inference with performance comparable to a naive odometric transform of the original point cloud sequence.
arXiv Detail & Related papers (2020-04-24T10:40:07Z)
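A minimal sketch of ego-motion compensation: past LiDAR sweeps are re-expressed in the latest ego frame using odometry, so static structure aligns across the sequence and only genuinely moving points remain displaced. The pose conventions and data below are placeholders.

```python
# Transform past LiDAR sweeps into the current vehicle frame with odometry.
import numpy as np

def compensate_ego_motion(clouds, poses_world_from_ego):
    """clouds: list of (N_i, 3) sweeps in their own ego frames;
    poses_world_from_ego: list of (4, 4) odometry poses, one per sweep.
    Returns all sweeps expressed in the *latest* ego frame."""
    T_ego_from_world = np.linalg.inv(poses_world_from_ego[-1])
    aligned = []
    for cloud, T_world_from_ego in zip(clouds, poses_world_from_ego):
        T = T_ego_from_world @ T_world_from_ego    # sweep frame -> latest frame
        homo = np.hstack([cloud, np.ones((cloud.shape[0], 1))])
        aligned.append((homo @ T.T)[:, :3])
    return aligned

# Two sweeps while the vehicle drives 1 m forward along x
pose0, pose1 = np.eye(4), np.eye(4)
pose1[0, 3] = 1.0
static_point = np.array([[5.0, 0.0, 0.0]])         # fixed in the world
aligned = compensate_ego_motion([static_point, static_point - [1, 0, 0]],
                                [pose0, pose1])
# Both sweeps now agree on the static point's position in the latest frame
```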
This list is automatically generated from the titles and abstracts of the papers on this site.