From Scratch to Sketch: Deep Decoupled Hierarchical Reinforcement
Learning for Robotic Sketching Agent
- URL: http://arxiv.org/abs/2208.04833v1
- Date: Tue, 9 Aug 2022 15:18:55 GMT
- Title: From Scratch to Sketch: Deep Decoupled Hierarchical Reinforcement
Learning for Robotic Sketching Agent
- Authors: Ganghun Lee, Minji Kim, Minsu Lee, Byoung-Tak Zhang
- Abstract summary: We formulate robotic sketching as a deep decoupled hierarchical reinforcement learning problem.
Two policies for stroke-based rendering and motor control are learned independently to achieve sub-tasks for drawing.
Our experimental results show that the two policies successfully learned the sub-tasks and collaborated to sketch the target images.
- Score: 20.406075470956065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an automated learning framework for a robotic sketching agent that
is capable of learning stroke-based rendering and motor control simultaneously.
We formulate the robotic sketching problem as a deep decoupled hierarchical
reinforcement learning problem; two policies for stroke-based rendering and motor
control are learned independently to achieve sub-tasks for drawing, and form a
hierarchy when cooperating for real-world drawing. Without hand-crafted
features, drawing sequences or trajectories, and inverse kinematics, the
proposed method trains the robotic sketching agent from scratch. We performed
experiments with a 6-DoF robot arm with a 2F gripper to sketch doodles. Our
experimental results show that the two policies successfully learned the
sub-tasks and collaborated to sketch the target images. We also examined the
robustness and flexibility of the method by varying drawing tools and surfaces.
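The abstract describes the decoupled hierarchy only in prose, so a minimal Python sketch may help fix the idea. Everything below is a hypothetical illustration, not the authors' implementation: the names StrokePolicy, MotorPolicy, and the env interface, the 6-dimensional stroke vector, and the control loop are all assumptions. The point it captures is the decoupling: a high-level policy decides *what* to draw from the target image and current canvas, while an independently trained low-level policy decides *how* to move the arm to realize each stroke.

```python
import numpy as np

class StrokePolicy:
    """High-level policy (hypothetical): given the target image and the
    current canvas, propose the next stroke as a short parameter vector
    (e.g., start/end points, curvature, width)."""
    def act(self, target_img: np.ndarray, canvas: np.ndarray) -> np.ndarray:
        # Placeholder: a trained network would map (target, canvas) -> stroke.
        return np.random.uniform(-1.0, 1.0, size=6)

class MotorPolicy:
    """Low-level policy (hypothetical): given a stroke parameter vector and
    the arm's joint state, output joint commands that trace the stroke,
    with no explicit inverse kinematics or hand-crafted trajectories."""
    def act(self, stroke: np.ndarray, joint_state: np.ndarray) -> np.ndarray:
        return np.zeros_like(joint_state)  # placeholder for a learned network

def sketch(target_img, env, stroke_policy, motor_policy, n_strokes=64):
    """Decoupled hierarchy at inference time: the two policies were trained
    independently and only cooperate here, when drawing for real."""
    canvas = env.reset()  # `env` is an assumed robot/canvas interface
    for _ in range(n_strokes):
        stroke = stroke_policy.act(target_img, canvas)
        done = False
        while not done:  # low-level steps until the stroke is finished
            action = motor_policy.act(stroke, env.joint_state())
            canvas, done = env.step(action)
    return canvas
```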
Related papers
- Learning Orbitally Stable Systems for Diagrammatically Teaching [14.839036866911089]
Diagrammatic Teaching is a paradigm for robots to acquire novel skills, whereby the user provides 2D sketches over images of the scene to shape the robot's motion.
In this work, we tackle the problem of teaching a robot to approach a surface and then follow cyclic motion on it, where the cycle of the motion can be arbitrarily specified by a single user-provided sketch over an image from the robot's camera.
arXiv Detail & Related papers (2023-09-19T04:03:42Z)
- Instructing Robots by Sketching: Learning from Demonstration via Probabilistic Diagrammatic Teaching [14.839036866911089]
Learning from Demonstration (LfD) enables robots to imitate expert demonstrations, allowing users to communicate their instructions in an intuitive manner.
Recent progress in LfD often relies on kinesthetic teaching or teleoperation as the medium for users to specify the demonstrations.
This paper introduces an alternative paradigm for LfD called Diagrammatic Teaching.
arXiv Detail & Related papers (2023-09-07T16:49:38Z)
- Bridging the Gap: Fine-to-Coarse Sketch Interpolation Network for High-Quality Animation Sketch Inbetweening [62.33071223229861]
A Fine-to-Coarse Sketch Interpolation Network (FC-SIN) is proposed to overcome sketch inbetweening issues.
FC-SIN incorporates multi-level guidance that formulates region-level correspondence, sketch-level correspondence and pixel-level dynamics.
We constructed a large-scale dataset, STD-12K, comprising 30 sketch animation series in diverse artistic styles.
arXiv Detail & Related papers (2023-08-25T09:51:03Z)
- Silver-Bullet-3D at ManiSkill 2021: Learning-from-Demonstrations and Heuristic Rule-based Methods for Object Manipulation [118.27432851053335]
This paper presents an overview and comparative analysis of our systems designed for the two tracks of the SAPIEN ManiSkill Challenge 2021.
The No Interaction track targets learning policies from pre-collected demonstration trajectories.
In this track, we design a Heuristic Rule-based Method (HRM) to trigger high-quality object manipulation by decomposing the task into a series of sub-tasks.
For each sub-task, simple rule-based control strategies are adopted to predict actions that can be applied to robotic arms (a minimal illustration follows this entry).
arXiv Detail & Related papers (2022-06-13T16:20:42Z)
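The decompose-then-apply-simple-rules idea above can be sketched in a few lines. This is a hedged illustration only: the sub-task names (reach, grasp, place), the completion predicates, the proportional gains, and the 4-dimensional action layout are generic pick-and-place assumptions, not the authors' actual heuristics.

```python
from typing import Callable, Dict, List, Tuple

def reach(obs: Dict) -> List[float]:
    # Proportional rule: move the end effector toward the object, gripper open.
    return [0.2 * d for d in obs["vec_to_object"]] + [1.0]

def grasp(obs: Dict) -> List[float]:
    # Hold position and close the gripper (fixed action).
    return [0.0, 0.0, 0.0, -1.0]

def place(obs: Dict) -> List[float]:
    # Carry the grasped object toward the goal, gripper closed.
    return [0.2 * d for d in obs["vec_to_goal"]] + [-1.0]

# Each sub-task pairs a "done" predicate with a simple rule-based controller.
SUB_TASKS: List[Tuple[Callable[[Dict], bool], Callable[[Dict], List[float]]]] = [
    (lambda obs: obs["near_object"], reach),
    (lambda obs: obs["holding"],     grasp),
    (lambda obs: obs["at_goal"],     place),
]

def hrm_action(obs: Dict) -> List[float]:
    """Run the controller of the first sub-task whose predicate is unmet."""
    for is_done, controller in SUB_TASKS:
        if not is_done(obs):
            return controller(obs)
    return [0.0, 0.0, 0.0, -1.0]  # all sub-tasks complete: hold still
```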
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate potential grasp configurations for objects depicted in freehand sketches.
Our model is trained and tested end-to-end, making it easy to deploy in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- Graph Neural Networks for Relational Inductive Bias in Vision-based Deep Reinforcement Learning of Robot Control [0.0]
This work introduces a neural network architecture that combines relational inductive bias and visual feedback to learn an efficient position control policy.
We derive a graph representation that models the robot's internal state together with a low-dimensional description of the visual scene generated by an image encoding network (see the sketch after this entry).
We show the ability of the model to improve sample efficiency for a 6-DoF robot arm in a visually realistic 3D environment.
arXiv Detail & Related papers (2022-03-11T15:11:54Z)
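A minimal sketch of the graph construction that summary describes, under stated assumptions: joint positions and velocities become per-node features, a small image encoder provides the low-dimensional scene description (here broadcast to every node), and edges follow the arm's kinematic chain. The class name, tensor shapes, and the flatten-plus-linear encoder are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SceneGraphBuilder(nn.Module):
    """Illustrative only: build node features and chain edges for a
    6-DoF arm, attaching a learned low-dimensional scene embedding."""
    def __init__(self, img_dim=3 * 64 * 64, scene_dim=16, n_joints=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(img_dim, scene_dim))
        self.n_joints = n_joints

    def forward(self, image, joint_pos, joint_vel):
        # Encode the camera image to a (B, scene_dim) vector, then broadcast
        # one copy of it to every joint node.
        scene = self.encoder(image)
        scene = scene.unsqueeze(1).expand(-1, self.n_joints, -1)
        # Node features: [position, velocity, scene embedding] per joint.
        nodes = torch.cat([joint_pos.unsqueeze(-1),
                           joint_vel.unsqueeze(-1), scene], dim=-1)
        # Bidirectional edges along the kinematic chain: joint i <-> joint i+1.
        idx = torch.arange(self.n_joints - 1)
        edge_index = torch.stack([torch.cat([idx, idx + 1]),
                                  torch.cat([idx + 1, idx])])
        return nodes, edge_index
```

A GNN policy head would then message-pass over (nodes, edge_index) to produce per-joint actions; that part is omitted here.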
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- Making Robots Draw A Vivid Portrait In Two Minutes [11.148458054454407]
We present a drawing robot that can automatically transfer a facial picture to a vivid portrait and then draw it on paper within two minutes on average.
At the heart of our system is a novel portrait synthesis algorithm based on deep learning.
The whole portrait drawing robotic system is named AiSketcher.
arXiv Detail & Related papers (2020-05-12T03:02:24Z)
- SketchyCOCO: Image Generation from Freehand Scene Sketches [71.85577739612579]
We introduce the first method for automatic image generation from scene-level freehand sketches.
The key contribution is an attribute-vector-bridged Generative Adversarial Network called EdgeGAN.
We have built a large-scale composite dataset called SketchyCOCO to support and evaluate the solution.
arXiv Detail & Related papers (2020-03-05T14:54:10Z)
- Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z)