An analytical diabolo model for robotic learning and control
- URL: http://arxiv.org/abs/2011.09068v1
- Date: Wed, 18 Nov 2020 03:38:12 GMT
- Title: An analytical diabolo model for robotic learning and control
- Authors: Felix von Drigalski, Devwrat Joshi, Takayuki Murooka, Kazutoshi
Tanaka, Masashi Hamaya and Yoshihisa Ijiri
- Abstract summary: We derive an analytical model of the diabolo-string system and evaluate its accuracy against data recorded via motion capture.
We show that our model outperforms a deep-learning-based predictor, both in terms of precision and physically consistent behavior.
We test our method on a real robot system by playing the diabolo, and throwing it to and catching it from a human player.
- Score: 15.64227695210532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a diabolo model that can be used for training
agents in simulation to play diabolo, as well as running it on a real dual
robot arm system. We first derive an analytical model of the diabolo-string
system and evaluate its accuracy using data recorded via motion capture, which
we release as a public dataset of skilled play with diabolos of different
dynamics. We show that our model outperforms a deep-learning-based predictor,
both in terms of precision and physically consistent behavior. Next, we
describe a method based on optimal control to generate robot trajectories that
produce the desired diabolo trajectory, as well as a system to transform
higher-level actions into robot motions. Finally, we test our method on a real
robot system by playing the diabolo, and throwing it to and catching it from a
human player.
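The model-evaluation step described in the abstract (roll out an analytical model, then score it against motion-capture recordings) can be illustrated with a minimal sketch. The toy dynamics below (linear spin-up with friction) and all parameter names are illustrative assumptions, not the paper's actual diabolo-string equations:

```python
import math

def simulate_spin(omega0, accel, friction, dt, steps):
    """Euler-integrate a toy spin model d(omega)/dt = accel - friction * omega.
    Illustrative stand-in for an analytical diabolo-string model."""
    omega, traj = omega0, [omega0]
    for _ in range(steps):
        omega += (accel - friction * omega) * dt
        traj.append(omega)
    return traj

def rmse(predicted, measured):
    """Root-mean-square error between a model rollout and recorded data."""
    n = len(predicted)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

# Compare the analytical rollout against (here, synthetic) "motion-capture" data.
model = simulate_spin(omega0=0.0, accel=10.0, friction=0.5, dt=0.1, steps=200)
recorded = [w + 0.01 for w in model]  # stand-in for real measurements
print(round(rmse(model, recorded), 4))  # → 0.01
```

A learned predictor could be scored with the same `rmse` on the same recordings, which is the kind of head-to-head comparison the abstract reports against its deep-learning baseline.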
Related papers
- VidMan: Exploiting Implicit Dynamics from Video Diffusion Model for Effective Robot Manipulation [79.00294932026266]
VidMan is a novel framework that employs a two-stage training mechanism to enhance stability and improve data utilization efficiency.
Our framework outperforms the state-of-the-art baseline model GR-1 on the CALVIN benchmark, achieving an 11.7% relative improvement, and demonstrates over 9% precision gains on the OXE small-scale dataset.
arXiv Detail & Related papers (2024-11-14T03:13:26Z)
- Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction [52.12746368727368]
Differentiable simulation has become a powerful tool for system identification.
Our approach calibrates object properties by using information from the robot, without relying on data from the object itself.
We demonstrate the effectiveness of our method on a low-cost robotic platform.
arXiv Detail & Related papers (2024-10-04T20:48:38Z)
- RoboPack: Learning Tactile-Informed Dynamics Models for Dense Packing [38.97168020979433]
We introduce an approach that combines visual and tactile sensing for robotic manipulation by learning a neural, tactile-informed dynamics model.
Our proposed framework, RoboPack, employs a recurrent graph neural network to estimate object states.
We demonstrate our approach on a real robot equipped with a compliant Soft-Bubble tactile sensor on non-prehensile manipulation and dense packing tasks.
arXiv Detail & Related papers (2024-07-01T16:08:37Z)
- DiffGen: Robot Demonstration Generation via Differentiable Physics Simulation, Differentiable Rendering, and Vision-Language Model [72.66465487508556]
DiffGen is a novel framework that integrates differentiable physics simulation, differentiable rendering, and a vision-language model.
It can generate realistic robot demonstrations by minimizing the distance between the embedding of the language instruction and the embedding of the simulated observation.
Experiments demonstrate that with DiffGen, we could efficiently and effectively generate robot data with minimal human effort or training time.
arXiv Detail & Related papers (2024-05-12T15:38:17Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Autonomous Golf Putting with Data-Driven and Physics-Based Methods [0.0]
We are developing a self-learning mechatronic golf robot using combined data-driven and physics-based methods.
Apart from the mechatronic control design of the robot, this task is accomplished by a camera system with image recognition and a neural network.
We demonstrate the synergetic combination of data-driven and physics-based methods on the golf robot as a mechatronic example system.
arXiv Detail & Related papers (2022-11-15T12:05:03Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- RoboCraft: Learning to See, Simulate, and Shape Elasto-Plastic Objects with Graph Networks [32.00371492516123]
We present a model-based planning framework for modeling and manipulating elasto-plastic objects.
Our system, RoboCraft, learns a particle-based dynamics model using graph neural networks (GNNs) to capture the structure of the underlying system.
We show through experiments that with just 10 minutes of real-world robotic interaction data, our robot can learn a dynamics model that can be used to synthesize control signals to deform elasto-plastic objects into various target shapes.
arXiv Detail & Related papers (2022-05-05T20:28:15Z)
- In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning [8.365690203298966]
We report the successful execution of in-air knotting of rope using a dual-arm two-finger robot based on deep learning.
Manually specifying appropriate robot motions for every object state in advance is difficult.
We constructed a model that instructs the robot to perform bowknots and overhand knots, based on two deep neural networks trained on data gathered from its sensorimotor experiences.
arXiv Detail & Related papers (2021-03-17T02:11:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.