Learning Implicit Priors for Motion Optimization
- URL: http://arxiv.org/abs/2204.05369v1
- Date: Mon, 11 Apr 2022 19:14:54 GMT
- Title: Learning Implicit Priors for Motion Optimization
- Authors: Alexander Lambert, An T. Le, Julen Urain, Georgia Chalvatzaki, Byron
Boots, Jan Peters
- Abstract summary: Energy-based Models (EBM) represent expressive probability distributions.
We present the modeling and algorithmic choices required to adapt EBMs to motion optimization.
- Score: 105.11889448885226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we focus on the problem of integrating Energy-based Models
(EBMs) as guiding priors for motion optimization. EBMs are neural networks that can
represent expressive probability distributions in the form of a Gibbs distribution
parameterized by a suitable energy function. Due to their implicit nature, they can
easily be integrated as optimization factors or as initial sampling distributions in
the motion optimization problem, making them good candidates for incorporating
data-driven priors into motion optimization. In this work, we present the modeling
and algorithmic choices required to adapt EBMs to motion optimization. We investigate
the benefit of including additional regularizers when learning EBMs so that they can
be used with gradient-based optimizers, and we present a set of EBM architectures for
learning generalizable distributions for manipulation tasks. We present multiple cases
in which EBMs can be integrated into motion optimization and evaluate the performance
of learned EBMs as guiding priors in both simulated and real robot experiments.
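As a rough, illustrative sketch of the core idea, the snippet below treats a learned energy function E_theta(x) of a Gibbs density p(x) ∝ exp(-E_theta(x)) as an extra cost factor in gradient-based trajectory optimization. The toy 2-D waypoint parameterization, the placeholder energy network, and all names are assumptions for illustration only, not the architecture or planner used in the paper.

```python
# Minimal sketch (not the authors' implementation): an EBM as a guiding prior
# in gradient-based motion optimization. Since -log p(x) = E_theta(x) + const,
# the learned energy can be added directly as a cost term.
import torch

# Placeholder for a learned energy network over 2-D waypoints (untrained here).
energy_net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
)

def trajectory_cost(traj, start, goal, prior_weight=1.0):
    """Task cost (smoothness + boundary terms) plus the EBM prior factor."""
    smoothness = ((traj[1:] - traj[:-1]) ** 2).sum()
    boundary = ((traj[0] - start) ** 2).sum() + ((traj[-1] - goal) ** 2).sum()
    prior = energy_net(traj).sum()          # energy acts as a data-driven cost
    return smoothness + boundary + prior_weight * prior

# Gradient-based optimization over a discretized 2-D trajectory.
start, goal = torch.tensor([0.0, 0.0]), torch.tensor([1.0, 1.0])
traj = torch.linspace(0, 1, 32).unsqueeze(1).repeat(1, 2).requires_grad_(True)
opt = torch.optim.Adam([traj], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = trajectory_cost(traj, start, goal)
    loss.backward()
    opt.step()
```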
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z) - GLHF: General Learned Evolutionary Algorithm Via Hyper Functions [16.391389860521134]
The General Pre-trained Optimization Model (GPOM) outperforms state-of-the-art evolutionary algorithms and pre-trained optimization models (POMs).
GPOM exhibits robust generalization capabilities across diverse task distributions, dimensions, population sizes, and optimization horizons.
arXiv Detail & Related papers (2024-05-06T09:11:49Z) - Improving Energy Conserving Descent for Machine Learning: Theory and
Practice [0.0]
We develop the theory of Energy Conserving Descent (ECD) and introduce ECDSep, a gradient-based optimization algorithm able to tackle convex and non-convex problems.
arXiv Detail & Related papers (2023-06-01T05:15:34Z) - Data-Driven Stochastic Motion Evaluation and Optimization with Image by
Spatially-Aligned Temporal Encoding [8.104557130048407]
This paper proposes a probabilistic motion prediction method for long motions. The motion is predicted so that it accomplishes a task from the initial state observed in the given image.
Our method seamlessly integrates the image and motion data into the image feature domain by spatially-aligned temporal encoding.
The effectiveness of the proposed method is demonstrated in a variety of experiments and comparisons with SOTA methods.
arXiv Detail & Related papers (2023-02-10T04:06:00Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver (a brief generic sketch of this strategy appears after this list).
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - End-to-End Stochastic Optimization with Energy-Based Model [18.60842637575249]
Decision-focused learning (DFL) was recently proposed for stochastic optimization problems that involve unknown parameters.
We propose SO-EBM, a general and efficient DFL method for stochastic optimization using energy-based models.
arXiv Detail & Related papers (2022-11-25T00:14:12Z) - Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z) - Semi-Empirical Objective Functions for MCMC Proposal Optimization [31.189518729816474]
We introduce and demonstrate a semi-empirical procedure for determining approximate objective functions suitable for optimizing arbitrarily parameterized proposal distributions.
We argue that Ab Initio objective functions are sufficiently robust to enable the confident optimization of MCMC proposal distributions parameterized by deep generative networks.
arXiv Detail & Related papers (2021-06-03T19:52:56Z) - Optimization-Inspired Learning with Architecture Augmentations and
Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
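As a generic illustration of the algorithm-unrolling strategy mentioned in the "Backpropagation of Unrolled Solvers with Folded Optimization" entry above, the sketch below differentiates through a fixed number of inner gradient steps. The quadratic inner problem and all names are illustrative assumptions, not that paper's method.

```python
# Minimal sketch of algorithm unrolling (not the paper's "folded optimization"
# approach): run a fixed number of inner solver steps and backpropagate
# through them with automatic differentiation.
import torch

def inner_solver(theta, x0, steps=20, lr=0.1):
    """Unrolled gradient descent on f(x) = 0.5 * ||x - theta||^2."""
    x = x0
    for _ in range(steps):
        grad = x - theta          # analytic gradient of the inner objective
        x = x - lr * grad         # each step stays on the autodiff graph
    return x

theta = torch.zeros(3, requires_grad=True)   # parameters of the inner problem
target = torch.tensor([1.0, -2.0, 0.5])      # desired solver output
outer_opt = torch.optim.SGD([theta], lr=0.5)

for _ in range(100):
    outer_opt.zero_grad()
    x_star = inner_solver(theta, x0=torch.zeros(3))
    outer_loss = ((x_star - target) ** 2).sum()
    outer_loss.backward()         # gradients flow through all unrolled steps
    outer_opt.step()
```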
This list is automatically generated from the titles and abstracts of the papers on this site.