PROSKILL: A formal skill language for acting in robotics
- URL: http://arxiv.org/abs/2403.07770v1
- Date: Tue, 12 Mar 2024 15:56:53 GMT
- Title: PROSKILL: A formal skill language for acting in robotics
- Authors: Félix Ingrand (LAAS-CNRS, Université de Toulouse, Toulouse, France)
- Abstract summary: Acting is an important decisional function for autonomous robots.
We propose a new language for programming acting skills.
This language maps unequivocally into a formal model which can be used to check properties offline or execute the skills.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Acting is an important decisional function for autonomous robots. Acting
relies on skills to implement and to model the activities it oversees:
refinement, local recovery, temporal dispatching, external asynchronous events,
and command execution, all done online. While sitting between planning and the
robotic platform, acting often relies on programming primitives and an
interpreter which executes these skills. Following our experience in providing
a formal framework to program the functional components of our robots, we
propose a new language to program acting skills. This language maps
unequivocally into a formal model which can then be used to check properties
offline, to execute the skills (more precisely, their formal equivalents), and
to perform runtime verification. We illustrate with a real example how we can
program a survey mission for a drone in this new language, prove some formal
properties on the program and directly execute the formal model on the drone to
perform the mission.
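As a concrete illustration of the kind of behaviour such skills encode, here is a minimal Python sketch of a survey skill with bounded local recovery. PROSKILL's actual syntax is not shown in this summary, so every name here (`goto_waypoint`, `SkillStatus`, `survey_skill`) and the retry scheme are illustrative assumptions, not the paper's language:

```python
# Hypothetical sketch of an acting skill, written in Python rather than in
# PROSKILL's actual syntax (which this summary does not show).
from enum import Enum

class SkillStatus(Enum):
    SUCCESS = "success"
    FAILURE = "failure"

def goto_waypoint(wp, commands):
    """Stand-in for a platform command; records the command it would send."""
    commands.append(("goto", wp))
    return SkillStatus.SUCCESS

def survey_skill(waypoints, commands, max_retries=2):
    """Visit each waypoint in order, with local recovery via bounded retries."""
    for wp in waypoints:
        for _attempt in range(max_retries + 1):
            if goto_waypoint(wp, commands) is SkillStatus.SUCCESS:
                break
        else:
            # Retries exhausted: report failure so the acting layer
            # (or a planner) can dispatch an alternative refinement.
            return SkillStatus.FAILURE
    return SkillStatus.SUCCESS

commands = []
status = survey_skill([(0, 0), (10, 0), (10, 10)], commands)
print(status.value, len(commands))  # success 3
```

A formal-model-based language would let such retry bounds and failure propagation be model-checked offline rather than only tested at runtime.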
Related papers
- RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control [140.48218261864153]
We study how vision-language models trained on Internet-scale data can be incorporated directly into end-to-end robotic control.
Our approach leads to performant robotic policies and enables RT-2 to obtain a range of emergent capabilities from Internet-scale training.
arXiv Detail & Related papers (2023-07-28T21:18:02Z)
- Surfer: Progressive Reasoning with World Models for Robotic Manipulation [51.26109827779267]
We introduce a novel and simple robot manipulation framework, called Surfer.
Surfer is based on a world model: it treats robot manipulation as a state transfer of the visual scene and decouples it into two parts, action and scene.
arXiv Detail & Related papers (2023-06-20T07:06:04Z)
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model [63.66204449776262]
Instruct2Act is a framework that maps multi-modal instructions to sequential actions for robotic manipulation tasks.
Our approach is adjustable and flexible in accommodating various instruction modalities and input types.
Our zero-shot method outperformed many state-of-the-art learning-based policies in several tasks.
arXiv Detail & Related papers (2023-05-18T17:59:49Z)
- ProgPrompt: Generating Situated Robot Task Plans using Large Language Models [68.57918965060787]
Large language models (LLMs) can be used to score potential next actions during task planning.
We present a programmatic LLM prompt structure that enables plan generation to function across situated environments.
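A programmatic prompt of this kind can be sketched as follows: available actions presented as imports, the scene as an object list, and the task as a function stub for the LLM to complete. The details here (action names, objects, layout) are illustrative assumptions, not the paper's exact format:

```python
# Illustrative sketch of a programmatic plan prompt; the specifics are
# hypothetical, not ProgPrompt's actual prompt format.
available_actions = ["grab", "putdown", "open", "close"]
objects_in_scene = ["apple", "fridge", "table"]

def build_prompt(task):
    """Assemble a pythonic prompt: action imports, scene objects, task stub."""
    header = "from actions import " + ", ".join(available_actions)
    scene = "objects = " + repr(objects_in_scene)
    stub = f"def {task}():"
    return "\n".join([header, scene, stub])

prompt = build_prompt("put_apple_in_fridge")
print(prompt)
```

Framing the prompt as code lets the LLM's completion be parsed and executed directly as a plan.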
arXiv Detail & Related papers (2022-09-22T20:29:49Z)
- Learning Flexible Translation between Robot Actions and Language Descriptions [16.538887534958555]
We propose a paired gated autoencoders (PGAE) for flexible translation between robot actions and language descriptions.
We train our model in an end-to-end fashion by pairing each action with appropriate descriptions that contain a signal informing about the translation direction.
With the option to use a pretrained language model as the language encoder, our model has the potential to recognise unseen natural language input.
arXiv Detail & Related papers (2022-07-15T12:37:05Z)
- Do As I Can, Not As I Say: Grounding Language in Robotic Affordances [119.29555551279155]
Large language models can encode a wealth of semantic knowledge about the world.
Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language.
We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions.
arXiv Detail & Related papers (2022-04-04T17:57:11Z)
- Summarizing a virtual robot's past actions in natural language [0.3553493344868413]
We show how a popular dataset that pairs robot actions with natural language descriptions, originally designed for an instruction-following task, can be repurposed as a training ground for robot action summarization.
We propose and test several methods of learning to generate such summaries, starting from either egocentric video frames of the robot taking actions or intermediate text representations of the actions used by an automatic planner.
arXiv Detail & Related papers (2022-03-13T15:00:46Z)
- Learning Language-Conditioned Robot Behavior from Offline Data and Crowd-Sourced Annotation [80.29069988090912]
We study the problem of learning a range of vision-based manipulation tasks from a large offline dataset of robot interaction.
We propose to leverage offline robot datasets with crowd-sourced natural language labels.
We find that our approach outperforms both goal-image specifications and language conditioned imitation techniques by more than 25%.
arXiv Detail & Related papers (2021-09-02T17:42:13Z)
- Translating Natural Language Instructions to Computer Programs for Robot Manipulation [0.6629765271909505]
We propose translating the natural language instruction to a Python function which queries the scene by accessing the output of the object detector.
We show that the proposed method performs better than training a neural network to directly predict the robot actions.
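The translation target described above can be sketched as follows. The function body, the `detections` format, and the instruction are all hypothetical; the paper's actual query interface is not shown in this summary:

```python
# Hypothetical example of the kind of Python function such a system might
# emit for the instruction "pick up the red cube". The scene is represented
# by the object detector's output, assumed here to be labelled detections.
def query_scene(detections):
    """Return the detection matching the requested object, or None."""
    for obj in detections:
        if obj["label"] == "cube" and obj["color"] == "red":
            return obj
    return None

detections = [
    {"label": "ball", "color": "blue", "position": (0.1, 0.4)},
    {"label": "cube", "color": "red", "position": (0.3, 0.2)},
]
target = query_scene(detections)
print(target["position"])  # (0.3, 0.2)
```

Emitting an interpretable query function, rather than raw actions, keeps the grounding step inspectable and debuggable.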
arXiv Detail & Related papers (2020-12-26T07:57:55Z)
- Exploratory Experiments on Programming Autonomous Robots in Jadescript [0.0]
This paper describes experiments to validate the possibility of programming autonomous robots using an agent-oriented programming language.
The agent-oriented programming paradigm is relevant because it offers language-level abstractions to process events and to command actuators.
A recent agent-oriented programming language called Jadescript is presented in this paper together with its new features specifically designed to handle events.
arXiv Detail & Related papers (2020-07-23T01:31:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.