In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning
- URL: http://arxiv.org/abs/2103.09402v1
- Date: Wed, 17 Mar 2021 02:11:58 GMT
- Title: In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning
- Authors: Kanata Suzuki, Momomi Kanamura, Yuki Suga, Hiroki Mori, Tetsuya Ogata
- Abstract summary: We report the successful execution of in-air knotting of rope using a dual-arm two-finger robot based on deep learning.
A manual description of appropriate robot motions corresponding to all object states is difficult to prepare in advance.
We constructed a model that instructed the robot to perform bowknots and overhand knots based on two deep neural networks trained on sensorimotor data gathered by the robot.
- Score: 8.365690203298966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we report the successful execution of in-air knotting of rope
using a dual-arm two-finger robot based on deep learning. Owing to its
flexibility, the state of the rope was in constant flux during the operation of
the robot. This required the robot control system to dynamically correspond to
the state of the object at all times. However, a manual description of
appropriate robot motions corresponding to all object states is difficult to
prepare in advance. To resolve this issue, we constructed a model that
instructed the robot to perform bowknots and overhand knots using two deep
neural networks trained on sensorimotor data gathered by the robot, including
readings from its visual and proximity sensors. The resultant model was verified to be
capable of predicting the appropriate robot motions based on the sensory
information available online. In addition, we designed certain task motions
based on the Ian knot method using the dual-arm two-finger robot. The designed
knotting motions do not require a dedicated workbench or robot hand, thereby
enhancing the versatility of the proposed method. Finally, experiments were
performed to evaluate the knotting performance and success rate of the real
robot while executing overhand knots and bowknots on rope. The
experimental results established the effectiveness and high performance of the
proposed method.
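The online prediction described above can be pictured as a closed control loop: at each step, current sensory readings and the previous motor command feed a recurrent network that emits the next motor command. The sketch below illustrates this loop with a minimal Elman-style recurrent step in numpy; the dimensions, weight values, and function names are illustrative assumptions, not the paper's actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): visual features, proximity
# readings, and joint commands for a dual-arm two-finger robot.
VIS_DIM, PROX_DIM, HID_DIM, MOTOR_DIM = 16, 4, 32, 14

# Randomly initialized weights stand in for a trained network.
W_in = rng.standard_normal((HID_DIM, VIS_DIM + PROX_DIM + MOTOR_DIM)) * 0.1
W_rec = rng.standard_normal((HID_DIM, HID_DIM)) * 0.1
W_out = rng.standard_normal((MOTOR_DIM, HID_DIM)) * 0.1

def predict_motion(vis, prox, motor, hidden):
    """One step of online prediction: fuse the current sensory input with
    the previous motor command and recurrent state, emit the next command."""
    x = np.concatenate([vis, prox, motor])
    hidden = np.tanh(W_in @ x + W_rec @ hidden)
    next_motor = np.tanh(W_out @ hidden)  # bounded joint targets in (-1, 1)
    return next_motor, hidden

# Closed-loop rollout over a few control steps.
hidden = np.zeros(HID_DIM)
motor = np.zeros(MOTOR_DIM)
for _ in range(5):
    vis = rng.standard_normal(VIS_DIM)    # placeholder camera features
    prox = rng.standard_normal(PROX_DIM)  # placeholder proximity readings
    motor, hidden = predict_motion(vis, prox, motor, hidden)

print(motor.shape)  # (14,)
```

The recurrent state is what lets the controller track the constantly changing rope configuration: the same sensory snapshot can demand different motions depending on where the robot is in the knotting sequence.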
Related papers
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z) - Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z) - Automated Gait Generation For Walking, Soft Robotic Quadrupeds [6.005998680766498]
Gait generation for soft robots is challenging due to the nonlinear dynamics and high dimensional input spaces of soft actuators.
We present a sample-efficient, simulation-free method for self-generating soft robot gaits.
This is the first demonstration of completely autonomous gait generation in a soft robot.
arXiv Detail & Related papers (2023-09-30T21:31:30Z) - AR2-D2: Training a Robot Without a Robot [53.10633639596096]
We introduce AR2-D2, a system for collecting demonstrations which does not require people with specialized training.
AR2-D2 is a framework in the form of an iOS app that people can use to record a video of themselves manipulating any object.
We show that data collected via our system enables the training of behavior cloning agents in manipulating real objects.
arXiv Detail & Related papers (2023-06-23T23:54:26Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, the effects of adversarial training do not pose a fair trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z) - A Transferable Legged Mobile Manipulation Framework Based on Disturbance Predictive Control [15.044159090957292]
Legged mobile manipulation, where a quadruped robot is equipped with a robotic arm, can greatly enhance the performance of the robot.
We propose a unified framework disturbance predictive control where a reinforcement learning scheme with a latent dynamic adapter is embedded into our proposed low-level controller.
arXiv Detail & Related papers (2022-03-02T14:54:10Z) - Transformer-based deep imitation learning for dual-arm robot manipulation [5.3022775496405865]
In a dual-arm manipulation setup, the increased number of state dimensions caused by the additional robot manipulators causes distractions.
We address this issue using a self-attention mechanism that computes dependencies between elements in a sequential input and focuses on important elements.
A Transformer, a variant of self-attention architecture, is applied to deep imitation learning to solve dual-arm manipulation tasks in the real world.
arXiv Detail & Related papers (2021-08-01T07:42:39Z) - An analytical diabolo model for robotic learning and control [15.64227695210532]
We derive an analytical model of the diabolo-string system and compare its accuracy using data recorded via motion capture.
We show that our model outperforms a deep-learning-based predictor, both in terms of precision and physically consistent behavior.
We test our method on a real robot system by playing the diabolo, and throwing it to and catching it from a human player.
arXiv Detail & Related papers (2020-11-18T03:38:12Z) - Reinforcement Learning Experiments and Benchmark for Solving Robotic Reaching Tasks [0.0]
Reinforcement learning has been successfully applied to solving the reaching task with robotic arms.
It is shown that augmenting the reward signal with the Hindsight Experience Replay exploration technique increases the average return of off-policy agents.
arXiv Detail & Related papers (2020-11-11T14:00:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.