Dynamics-Guided Diffusion Model for Robot Manipulator Design
- URL: http://arxiv.org/abs/2402.15038v1
- Date: Fri, 23 Feb 2024 01:19:30 GMT
- Title: Dynamics-Guided Diffusion Model for Robot Manipulator Design
- Authors: Xiaomeng Xu, Huy Ha, Shuran Song
- Abstract summary: We present a data-driven framework for generating manipulator geometry designs for a given manipulation task.
Instead of training different design models for each task, our approach employs a learned dynamics network shared across tasks.
- Score: 24.703003555261482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Dynamics-Guided Diffusion Model, a data-driven framework for
generating manipulator geometry designs for a given manipulation task. Instead
of training different design models for each task, our approach employs a
learned dynamics network shared across tasks. For a new manipulation task, we
first decompose it into a collection of individual motion targets which we call
target interaction profile, where each individual motion can be modeled by the
shared dynamics network. The design objective constructed from the target and
predicted interaction profiles provides a gradient to guide the refinement of
finger geometry for the task. This refinement process is executed as a
classifier-guided diffusion process, where the design objective acts as the
classifier guidance. We evaluate our framework on various manipulation tasks,
under the sensor-less setting using only an open-loop parallel jaw motion. Our
generated designs outperform optimization-based and unguided diffusion
baselines by a relative 31.5% and 45.3%, respectively, in average manipulation
success rate.
With the ability to generate a design within 0.8 seconds, our framework could
facilitate rapid design iteration and enhance the adoption of data-driven
approaches for robotic mechanism design.
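As a rough illustration of the guidance mechanism described in the abstract, the sketch below implements only the objective-gradient term with stand-in components: `predict_profile` plays the role of the learned dynamics network (a fixed linear map here), and `guided_refinement` applies the gradient as plain ascent rather than folding it into the denoising steps of a diffusion model as the paper does. All names, shapes, and the linear dynamics model are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Stand-in "dynamics network": maps a finger-geometry vector to a
# predicted interaction profile. A fixed linear map here; the paper
# uses a learned network shared across tasks.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16)) * 0.1


def predict_profile(geometry):
    """Predicted interaction profile for a candidate geometry."""
    return W @ geometry


def objective_grad(geometry, target_profile):
    """Gradient of -0.5 * ||target - predicted||^2 w.r.t. the geometry.

    The residual between target and predicted interaction profiles is
    back-propagated through the (linear) dynamics model.
    """
    residual = target_profile - predict_profile(geometry)
    return W.T @ residual


def guided_refinement(geometry, target_profile, steps=200, lr=0.05):
    """Gradient-guided refinement loop.

    Shown as plain gradient ascent for clarity; the full method instead
    adds this gradient as classifier guidance inside each reverse step
    of a diffusion model over finger geometries.
    """
    g = geometry.copy()
    for _ in range(steps):
        g += lr * objective_grad(g, target_profile)
    return g
```

Starting from any initial geometry, the loop moves the design so that its predicted interaction profile approaches the target profile decomposed from the task.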
Related papers
- ManiCM: Real-time 3D Diffusion Policy via Consistency Model for Robotic Manipulation [16.272352213590313]
Diffusion models have been verified to be effective in generating complex distributions from natural images to motion trajectories.
Recent methods show impressive performance in 3D robotic manipulation tasks but suffer from severe runtime inefficiency due to multiple denoising steps.
We propose a real-time robotic manipulation model named ManiCM that imposes the consistency constraint to the diffusion process.
arXiv Detail & Related papers (2024-06-03T17:59:23Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid such adversarial examples.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- Learning visual-based deformable object rearrangement with local graph neural networks [4.333220038316982]
We propose a novel representation strategy that can efficiently model the deformable object states with a set of keypoints and their interactions.
We also propose a light local GNN learning to jointly model the deformable rearrangement dynamics and infer the optimal manipulation actions.
Our method reaches much higher success rates on a variety of deformable rearrangement tasks (96.3% on average) than state-of-the-art methods in simulation experiments.
arXiv Detail & Related papers (2023-10-16T11:42:54Z)
- Deep Graph Reprogramming [112.34663053130073]
"Deep graph reprogramming" is a model-reuse task tailored for graph neural networks (GNNs).
We propose an innovative Data Reprogramming paradigm alongside a Model Reprogramming paradigm.
arXiv Detail & Related papers (2023-04-28T02:04:29Z)
- Unifying Flow, Stereo and Depth Estimation [121.54066319299261]
We present a unified formulation and model for three motion and 3D perception tasks.
We formulate all three tasks as a unified dense correspondence matching problem.
Our model naturally enables cross-task transfer since the model architecture and parameters are shared across tasks.
arXiv Detail & Related papers (2022-11-10T18:59:54Z)
- Efficient Automatic Machine Learning via Design Graphs [72.85976749396745]
We propose FALCON, an efficient sample-based method to search for the optimal model design.
FALCON features 1) a task-agnostic module, which performs message passing on the design graph via a Graph Neural Network (GNN), and 2) a task-specific module, which conducts label propagation of the known model performance information.
We empirically show that FALCON can efficiently obtain the well-performing designs for each task using only 30 explored nodes.
arXiv Detail & Related papers (2022-10-21T21:25:59Z)
- SE(3)-DiffusionFields: Learning smooth cost functions for joint grasp and motion optimization through diffusion [34.25379651790627]
This work introduces a method for learning data-driven SE(3) cost functions as diffusion models.
We focus on learning SE(3) diffusion models for 6DoF grasping, giving rise to a novel framework for joint grasp and motion optimization.
arXiv Detail & Related papers (2022-09-08T14:50:23Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot and Radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
- Physical Design using Differentiable Learned Simulators [9.380022457753938]
In inverse design, learned forward simulators are combined with gradient-based design optimization.
This framework produces high-quality designs by propagating through trajectories of hundreds of steps.
Our results suggest that despite some remaining challenges, machine learning-based simulators are maturing to the point where they can support general-purpose design optimization.
arXiv Detail & Related papers (2022-02-01T19:56:39Z)
- Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696]
In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diversified set of objects.
We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
This work takes a step on dynamics modeling in hand-object interactions from dense tactile sensing.
arXiv Detail & Related papers (2021-09-09T16:04:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.