Using Knowledge Representation and Task Planning for Robot-agnostic
Skills on the Example of Contact-Rich Wiping Tasks
- URL: http://arxiv.org/abs/2308.14206v1
- Date: Sun, 27 Aug 2023 21:17:32 GMT
- Title: Using Knowledge Representation and Task Planning for Robot-agnostic
Skills on the Example of Contact-Rich Wiping Tasks
- Authors: Matthias Mayr, Faseeh Ahmad, Alexander Duerr, Volker Krueger
- Abstract summary: We show how a single robot skill that utilizes knowledge representation, task planning, and automatic selection of skill implementations can be executed in different contexts.
We demonstrate how the skill-based control platform enables this with contact-rich wiping tasks on different robot systems.
- Score: 44.99833362998488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The transition to agile manufacturing, Industry 4.0, and high-mix-low-volume
tasks require robot programming solutions that are flexible. However, most
deployed robot solutions are still statically programmed and use stiff position
control, which limit their usefulness. In this paper, we show how a single
robot skill that utilizes knowledge representation, task planning, and
automatic selection of skill implementations based on the input parameters can
be executed in different contexts. We demonstrate how the skill-based control
platform enables this with contact-rich wiping tasks on different robot
systems. To achieve that in this case study, our approach needs to address
different kinematics, gripper types, vendors, and fundamentally different
control interfaces. We conducted the experiments with a mobile platform that
has a Universal Robots UR5e 6 degree-of-freedom robot arm with position control
and a 7 degree-of-freedom KUKA iiwa with torque control.
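The core mechanism described in the abstract, picking a skill implementation that matches a robot's capabilities, can be sketched as follows. This is an illustrative sketch only, not the authors' actual platform; all names (`Robot`, `SkillImplementation`, `select_implementation`) and the matching criteria are hypothetical.

```python
# Hypothetical sketch of capability-based skill-implementation selection.
# The paper's two robots differ in DoF and control interface; a matching
# implementation is chosen automatically from a small knowledge base.
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    dof: int
    control_interface: str  # "position" or "torque"

@dataclass
class SkillImplementation:
    name: str
    required_interface: str
    min_dof: int

def select_implementation(robot, implementations):
    """Return the first wiping-skill implementation the robot can execute."""
    for impl in implementations:
        if (impl.required_interface == robot.control_interface
                and robot.dof >= impl.min_dof):
            return impl
    raise LookupError(f"No implementation matches {robot.name}")

# Hypothetical knowledge base of wiping-skill implementations.
implementations = [
    SkillImplementation("wipe_cartesian_impedance", "torque", 7),
    SkillImplementation("wipe_position_trajectory", "position", 6),
]

ur5e = Robot("UR5e", 6, "position")        # position-controlled, 6 DoF
iiwa = Robot("KUKA iiwa", 7, "torque")     # torque-controlled, 7 DoF

print(select_implementation(ur5e, implementations).name)
print(select_implementation(iiwa, implementations).name)
```

The same skill call thus resolves to a position-trajectory implementation on the UR5e and an impedance-control implementation on the iiwa, mirroring how a single robot-agnostic skill can execute in different contexts.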
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate the model on its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- RAMPA: Robotic Augmented Reality for Machine Programming and Automation [4.963604518596734]
This paper introduces Robotic Augmented Reality for Machine Programming (RAMPA)
RAMPA is a system that utilizes the capabilities of state-of-the-art and commercially available AR headsets, e.g., Meta Quest 3.
Our approach enables in-situ data recording, visualization, and fine-tuning of skill demonstrations directly within the user's physical environment.
arXiv Detail & Related papers (2024-10-17T10:21:28Z)
- Mastering Contact-rich Tasks by Combining Soft and Rigid Robotics with Imitation Learning [4.986982677009744]
Soft robots have the potential to revolutionize the use of robotic systems.
Traditional rigid robots offer high accuracy and repeatability but lack the flexibility of soft robots.
This work presents a novel hybrid robotic platform that integrates a rigid manipulator with a fully developed soft arm.
arXiv Detail & Related papers (2024-10-10T10:18:03Z)
- Generalized Robot Learning Framework [10.03174544844559]
We present a low-cost robot learning framework that is both easily reproducible and transferable to various robots and environments.
We demonstrate that deployable imitation learning can be successfully applied even to industrial-grade robots.
arXiv Detail & Related papers (2024-09-18T15:34:31Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- QUAR-VLA: Vision-Language-Action Model for Quadruped Robots [37.952398683031895]
The central idea is to elevate the overall intelligence of the robot.
We propose QUAdruped Robotic Transformer (QUART), a family of VLA models to integrate visual information and instructions from diverse modalities as input.
Our approach leads to performant robotic policies and enables QUART to obtain a range of emergent capabilities.
arXiv Detail & Related papers (2023-12-22T06:15:03Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Bi-Manual Manipulation and Attachment via Sim-to-Real Reinforcement Learning [23.164743388342803]
We study how to solve bi-manual tasks using reinforcement learning trained in simulation.
We also discuss modifications to our simulated environment which lead to effective training of RL policies.
In this work, we design a Connect Task, where the aim is for two robot arms to pick up and attach two blocks with magnetic connection points.
arXiv Detail & Related papers (2022-03-15T21:49:20Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms [60.59764170868101]
Reinforcement learning methods can achieve significant performance but require a large amount of training data collected on the same robotic platform.
We formulate it as a few-shot meta-learning problem where the goal is to find a model that captures the common structure shared across different robotic platforms.
We experimentally evaluate our framework on a simulated reaching and a real-robot picking task using 400 simulated robots.
arXiv Detail & Related papers (2021-03-05T14:16:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.