SE(3)-DiffusionFields: Learning smooth cost functions for joint grasp and motion optimization through diffusion
- URL: http://arxiv.org/abs/2209.03855v4
- Date: Sun, 18 Jun 2023 08:29:48 GMT
- Title: SE(3)-DiffusionFields: Learning smooth cost functions for joint grasp and motion optimization through diffusion
- Authors: Julen Urain and Niklas Funk and Jan Peters and Georgia Chalvatzaki
- Abstract summary: This work introduces a method for learning data-driven SE(3) cost functions as diffusion models.
We focus on learning SE(3) diffusion models for 6DoF grasping, giving rise to a novel framework for joint grasp and motion optimization.
- Score: 34.25379651790627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-objective optimization problems are ubiquitous in robotics, e.g., the
optimization of a robot manipulation task requires a joint consideration of
grasp pose configurations, collisions and joint limits. While some demands can
be easily hand-designed, e.g., the smoothness of a trajectory, several
task-specific objectives need to be learned from data. This work introduces a
method for learning data-driven SE(3) cost functions as diffusion models.
Diffusion models can represent highly-expressive multimodal distributions and
exhibit proper gradients over the entire space due to their score-matching
training objective. Learning costs as diffusion models allows their seamless
integration with other costs into a single differentiable objective function,
enabling joint gradient-based motion optimization. In this work, we focus on
learning SE(3) diffusion models for 6DoF grasping, giving rise to a novel
framework for joint grasp and motion optimization without needing to decouple
grasp selection from trajectory generation. We evaluate the representation
power of our SE(3) diffusion models w.r.t. classical generative models, and we
showcase the superior performance of our proposed optimization framework in a
series of simulated and real-world robotic manipulation tasks against
representative baselines.
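
To make the joint-optimization idea concrete, below is a minimal, hedged sketch of how a learned grasp cost could be composed with a hand-designed smoothness cost into a single differentiable objective and minimized by gradient descent over an entire trajectory. This is not the authors' implementation: the `GraspEnergy` MLP is an untrained stand-in for the trained SE(3) diffusion-model cost, poses are simplified to 6-D vectors instead of SE(3) Lie-group elements, and differentiable forward kinematics is reduced to reading the final waypoint as the end-effector pose.

```python
# Hedged sketch, not the paper's code: compose a learned grasp cost with a
# hand-designed smoothness cost into one differentiable objective and run
# gradient-based optimization over the full trajectory.
# Assumptions: GraspEnergy is an untrained MLP standing in for the trained
# SE(3) diffusion-model cost, poses are 6-D vectors (translation + rotation
# vector) rather than SE(3) group elements, and forward kinematics is
# replaced by taking the last waypoint as the end-effector pose.
import torch
import torch.nn as nn


class GraspEnergy(nn.Module):
    """Stand-in for the learned SE(3) cost; assumed trained via score matching."""

    def __init__(self, pose_dim: int = 6, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, pose: torch.Tensor) -> torch.Tensor:
        return self.net(pose).squeeze(-1)


def smoothness_cost(traj: torch.Tensor) -> torch.Tensor:
    # Penalize squared finite differences between consecutive waypoints.
    return ((traj[1:] - traj[:-1]) ** 2).sum()


def joint_objective(traj, grasp_energy, w_grasp=1.0, w_smooth=0.1):
    # The last waypoint plays the role of the end-effector grasp pose; the paper
    # instead maps a joint-space trajectory through differentiable forward kinematics.
    grasp_pose = traj[-1]
    return w_grasp * grasp_energy(grasp_pose) + w_smooth * smoothness_cost(traj)


torch.manual_seed(0)
energy = GraspEnergy()
traj = torch.zeros(32, 6, requires_grad=True)  # 32 waypoints, 6-D poses
opt = torch.optim.Adam([traj], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = joint_objective(traj, energy)
    loss.backward()
    opt.step()
print("final objective value:", float(loss))
```

In the paper's actual framework, further costs such as collision and joint-limit terms would be added to the same sum, and gradients would flow through the robot's forward kinematics, so grasp selection and trajectory generation are optimized jointly rather than decoupled.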
Related papers
- Diffusion Policies for Generative Modeling of Spacecraft Trajectories [1.2074552857379275]
A key shortcoming in current machine learning-based methods for trajectory generation is that they require large datasets.
In this work, we leverage compositional diffusion modeling to efficiently adapt to out-of-distribution data.
We demonstrate the capability of compositional diffusion models for inference-time 6 DoF minimum-fuel landing site selection.
arXiv Detail & Related papers (2025-01-01T18:22:37Z)
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm, enabling us to have better control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
arXiv Detail & Related papers (2024-10-29T14:41:44Z)
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning the underlying distribution of high-quality solutions.
We propose a new framework, which leverages intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
arXiv Detail & Related papers (2024-08-13T07:56:21Z)
- Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
arXiv Detail & Related papers (2024-06-14T07:16:18Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid the adversarial examples that arise when optimizing directly over a learned surrogate model.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- Improving Efficiency of Diffusion Models via Multi-Stage Framework and Tailored Multi-Decoder Architectures [12.703947839247693]
Diffusion models, emerging as powerful deep generative tools, excel in various applications.
However, their remarkable generative performance is hindered by slow training and sampling.
This is due to the necessity of tracking extensive forward and reverse diffusion trajectories.
We present a multi-stage framework inspired by our empirical findings to tackle these challenges.
arXiv Detail & Related papers (2023-12-14T17:48:09Z)
- Asynchronous Multi-Model Dynamic Federated Learning over Wireless Networks: Theory, Modeling, and Optimization [20.741776617129208]
Federated learning (FL) has emerged as a key technique for distributed machine learning (ML).
We first formulate rectangular scheduling steps and functions to capture the impact of system parameters on learning performance.
Our analysis sheds light on the joint impact of device training variables and asynchronous scheduling decisions.
arXiv Detail & Related papers (2023-05-22T21:39:38Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot robot and a radio-controlled (RC) car.
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
- Optimization-Inspired Learning with Architecture Augmentations and Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.