A Non-parametric Skill Representation with Soft Null Space Projectors
for Fast Generalization
- URL: http://arxiv.org/abs/2209.08522v1
- Date: Sun, 18 Sep 2022 10:04:59 GMT
- Title: A Non-parametric Skill Representation with Soft Null Space Projectors
for Fast Generalization
- Authors: João Silvério and Yanlong Huang
- Abstract summary: We derive a non-parametric movement primitive that contains a null space projector.
We show that such a formulation allows for fast and efficient motion generation with computational complexity O(n^2) without involving matrix inversions.
For demonstrated skills with high-dimensional inputs we show that it permits on-the-fly adaptation as well.
- Score: 7.119677737397071
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the last two decades, the robotics community witnessed the emergence of
various motion representations that have been used extensively, particularly in
behavioral cloning, to compactly encode and generalize skills. Among these,
probabilistic approaches have earned a relevant place, owing to their encoding
of variations, correlations and adaptability to new task conditions. Modulating
such primitives, however, is often cumbersome due to the need for parameter
re-optimization which frequently entails computationally costly operations. In
this paper we derive a non-parametric movement primitive formulation that
contains a null space projector. We show that such a formulation allows for
fast and efficient motion generation with computational complexity O(n^2)
without involving matrix inversions, whose complexity is O(n^3). This is achieved by
using the null space to track secondary targets, with a precision determined by
the training dataset. Using a 2D example associated with time input we show
that our non-parametric solution compares favourably with a state-of-the-art
parametric approach. For demonstrated skills with high-dimensional inputs we
show that it permits on-the-fly adaptation as well.
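The classical null-space construction that the abstract refers to can be sketched as follows. This is the standard projector built from an explicit pseudo-inverse, i.e. the O(n^3) operation the paper avoids, not the paper's non-parametric soft projector; the Jacobian and targets are toy values.

```python
import numpy as np

def resolve(J, dx_primary, dq_secondary):
    """Combine a primary task velocity with a secondary objective
    projected into the null space of the primary task Jacobian J."""
    J_pinv = np.linalg.pinv(J)               # pseudo-inverse: O(n^3) in general
    N = np.eye(J.shape[1]) - J_pinv @ J      # null space projector
    return J_pinv @ dx_primary + N @ dq_secondary

# Toy 2-DoF example: the primary task constrains only the x-direction,
# leaving one redundant direction free for the secondary target.
J = np.array([[1.0, 0.0]])
dq = resolve(J, np.array([0.5]), np.array([0.0, 1.0]))
# The primary task is met exactly; the secondary target acts only in the null space.
assert np.allclose(J @ dq, [0.5])
assert np.allclose(dq, [0.5, 1.0])
```

The key property is that the secondary term cannot disturb the primary task, since N maps it into directions J ignores; the paper's contribution is obtaining this behavior directly from demonstrations without the pseudo-inverse.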
Related papers
- Generative modeling of time-dependent densities via optimal transport
and projection pursuit [3.069335774032178]
We propose a cheap alternative to popular deep learning algorithms for temporal modeling.
Our method is highly competitive compared with state-of-the-art solvers.
arXiv Detail & Related papers (2023-04-19T13:50:13Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient
and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Oracle-Preserving Latent Flows [58.720142291102135]
We develop a methodology for the simultaneous discovery of multiple nontrivial continuous symmetries across an entire labelled dataset.
The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function.
The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to transformations invariant with respect to high-dimensional oracles.
arXiv Detail & Related papers (2023-02-02T00:13:32Z)
- Quantum Sparse Coding [5.130440339897477]
We develop a quantum-inspired algorithm for sparse coding.
The emergence of quantum computers and Ising machines can potentially lead to more accurate estimations.
We conduct numerical experiments with simulated data on LightSolver's quantum-inspired digital platform.
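The combinatorial atom-selection problem that such quantum-inspired solvers target can be illustrated with a minimal classical baseline, matching pursuit; the dictionary and signal here are toy data, and this greedy loop stands in for the Ising/QUBO formulation the paper actually uses.

```python
import numpy as np

def matching_pursuit(D, y, n_atoms):
    """Greedily select n_atoms columns of dictionary D to approximate y.
    Columns of D are assumed to have unit norm."""
    coeffs = np.zeros(D.shape[1])
    residual = y.copy()
    for _ in range(n_atoms):
        scores = D.T @ residual          # correlation of residual with each atom
        k = np.argmax(np.abs(scores))    # best-matching atom
        coeffs[k] += scores[k]
        residual = y - D @ coeffs        # refresh residual
    return coeffs

D = np.eye(4)                            # trivial orthonormal dictionary
y = np.array([0.0, 3.0, 0.0, 1.0])
c = matching_pursuit(D, y, n_atoms=2)
assert np.allclose(D @ c, y)             # exact 2-sparse recovery
```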
arXiv Detail & Related papers (2022-09-08T13:00:30Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
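The column-wise iterative scheme can be sketched as below; a plain least-squares regressor stands in for HyperImpute's automatically selected per-column learners, and the data matrix is made up.

```python
import numpy as np

def iterative_impute(X, n_iters=10):
    """Mean-initialize missing entries, then repeatedly re-predict each
    incomplete column from the other columns until the loop ends."""
    X = X.copy()
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])   # mean initialization
    for _ in range(n_iters):
        for j in range(X.shape[1]):
            if not mask[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([others, np.ones(len(X))])  # features + bias
            obs = ~mask[:, j]
            w, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[mask[:, j], j] = A[mask[:, j]] @ w       # refresh imputations
    return X

# Column 1 is exactly 2 * column 0, with one entry missing.
X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, np.nan], [4.0, 8.0]])
X_imp = iterative_impute(X)
assert abs(X_imp[2, 1] - 6.0) < 1e-6
```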
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Toward Learning Robust and Invariant Representations with Alignment
Regularization and Data Augmentation [76.85274970052762]
This paper is motivated by a proliferation of options of alignment regularizations.
We evaluate the performances of several popular design choices along the dimensions of robustness and invariance.
We also formally analyze the behavior of alignment regularization to complement our empirical study under assumptions we consider realistic.
arXiv Detail & Related papers (2022-06-04T04:29:19Z)
- Neural Implicit Flow: a mesh-agnostic dimensionality reduction paradigm
of spatio-temporal data [4.996878640124385]
We propose a general framework called Neural Implicit Flow (NIF) that enables a mesh-agnostic, low-rank representation of large-scale, parametric, spatio-temporal data.
NIF consists of two modified multilayer perceptrons: (i) ShapeNet, which isolates and represents the spatial complexity, and (ii) ParameterNet, which accounts for any other input measurements, including parametric dependencies, time, and sensor measurements.
We demonstrate the utility of NIF for parametric surrogate modeling, enabling the interpretable representation and compression of complex spatio-temporal dynamics, efficient many-spatial-temporal generalization, and improved performance for sparse
arXiv Detail & Related papers (2022-04-07T05:02:58Z)
- Sketching as a Tool for Understanding and Accelerating Self-attention
for Long Sequences [52.6022911513076]
Transformer-based models are not efficient in processing long sequences due to the quadratic space and time complexity of the self-attention modules.
Prior works such as Linformer and Informer reduce the quadratic complexity to linear (modulo logarithmic factors) via low-dimensional projection and row selection.
Based on the theoretical analysis, we propose Skeinformer to accelerate self-attention and further improve the accuracy of matrix approximation to self-attention.
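The low-dimensional projection idea can be sketched as follows (in the style of Linformer): keys and values are projected from sequence length n down to k << n before the softmax, so the score matrix is (n, k) rather than (n, n). The projection E here is random rather than learned, and the shapes are toy values.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def lowrank_attention(Q, K, V, E):
    """E: (k, n) projection applied along the sequence axis of K and V."""
    K_proj, V_proj = E @ K, E @ V                           # (k, d) each
    scores = softmax(Q @ K_proj.T / np.sqrt(Q.shape[1]))    # (n, k), not (n, n)
    return scores @ V_proj                                  # (n, d)

rng = np.random.default_rng(0)
n, d, k = 512, 64, 32
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E = rng.standard_normal((k, n)) / np.sqrt(n)
out = lowrank_attention(Q, K, V, E)
assert out.shape == (n, d)
```

Memory and time for the score matrix drop from O(n^2) to O(n k), which is the quadratic-to-linear reduction the summary describes.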
arXiv Detail & Related papers (2021-12-10T06:58:05Z)
- Adaptive Machine Learning for Time-Varying Systems: Low Dimensional
Latent Space Tuning [91.3755431537592]
We present a recently developed method of adaptive machine learning for time-varying systems.
Our approach is to map very high (N>100k) dimensional inputs into a low dimensional (N~2) latent space at the output of the encoder section of an encoder-decoder CNN.
This method allows us to learn correlations within the data and to track their evolution in real time based on feedback, without interruptions.
arXiv Detail & Related papers (2021-07-13T16:05:28Z)
- Mixed-Integer Nonlinear Programming for State-based Non-Intrusive Load
Monitoring [2.2237337682863125]
Non-Intrusive Load Monitoring (NILM) is the task of inferring the energy consumption of each appliance given the aggregate signal recorded by a single smart meter.
We propose a novel two-stage optimization-based approach for energy disaggregation.
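The state-based disaggregation problem can be illustrated with a tiny exhaustive search: given each appliance's possible power levels, find the combination of states whose sum best matches the aggregate reading. The appliance data are made up, and this brute-force loop stands in for the mixed-integer nonlinear program the paper actually solves.

```python
from itertools import product

def disaggregate(aggregate, appliance_levels):
    """Return the tuple of per-appliance state indices whose summed power
    is closest to the aggregate meter reading."""
    best_states, best_err = None, float("inf")
    for states in product(*[range(len(l)) for l in appliance_levels]):
        total = sum(levels[s] for levels, s in zip(appliance_levels, states))
        err = abs(total - aggregate)
        if err < best_err:
            best_states, best_err = states, err
    return best_states

# fridge: 0/150 W, kettle: 0/2000 W, lamp: 0/60 W (hypothetical appliances)
levels = [[0, 150], [0, 2000], [0, 60]]
assert disaggregate(2150, levels) == (1, 1, 0)   # fridge + kettle on
```

Exhaustive search is exponential in the number of appliances, which is why the paper formulates the problem as an optimization program instead.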
arXiv Detail & Related papers (2021-06-16T22:16:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.