Exploratory Experiments on Programming Autonomous Robots in Jadescript
- URL: http://arxiv.org/abs/2007.11741v1
- Date: Thu, 23 Jul 2020 01:31:46 GMT
- Title: Exploratory Experiments on Programming Autonomous Robots in Jadescript
- Authors: Eleonora Iotti, Giuseppe Petrosino, Stefania Monica, Federico Bergenti
- Abstract summary: This paper describes experiments to validate the possibility of programming autonomous robots using an agent-oriented programming language.
The agent-oriented programming paradigm is relevant because it offers language-level abstractions to process events and to command actuators.
A recent agent-oriented programming language called Jadescript is presented in this paper together with its new features specifically designed to handle events.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes exploratory experiments to validate the possibility of
programming autonomous robots using an agent-oriented programming language.
Proper perception of the environment, by means of various types of sensors, and
timely reaction to external events, by means of effective actuators, are
essential to provide robots with a sufficient level of autonomy. The
agent-oriented programming paradigm is relevant in this respect because it
offers language-level abstractions to process events and to command actuators.
A recent agent-oriented programming language called Jadescript is presented in
this paper together with its new features specifically designed to handle
events. Exploratory experiments on a simple case-study application are
presented to show the validity of the proposed approach and to exemplify the
use of the language to program autonomous robots.
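
As a concrete illustration of the language-level event abstractions the abstract refers to, the following is a minimal Python sketch of the sense-event-react loop that agent-oriented languages such as Jadescript support natively. It is an analogue written for this summary, not actual Jadescript syntax; the names in it (Agent, Event, the handler registry) are assumptions.

```python
from dataclasses import dataclass
from queue import Empty, Queue
from typing import Callable


@dataclass
class Event:
    kind: str              # e.g. "obstacle-detected" or "message-received"
    payload: object = None


class Agent:
    """Dispatches queued sensor events to registered handlers, mimicking
    the event-handler declarations of agent-oriented languages."""

    def __init__(self) -> None:
        self._events = Queue()
        self._handlers: dict[str, Callable[[Event], None]] = {}

    def on(self, kind: str, handler: Callable[[Event], None]) -> None:
        """Register a reaction, akin to a language-level 'on <event>' clause."""
        self._handlers[kind] = handler

    def post(self, event: Event) -> None:
        """Called by sensor drivers to report a percept."""
        self._events.put(event)

    def step(self) -> None:
        """Process one pending event, if any (timely reaction to percepts)."""
        try:
            event = self._events.get(timeout=0.1)
        except Empty:
            return
        handler = self._handlers.get(event.kind)
        if handler is not None:
            handler(event)


# Hypothetical usage: steer away when a proximity sensor fires.
robot = Agent()
robot.on("obstacle-detected", lambda e: print(f"steering away from {e.payload}"))
robot.post(Event("obstacle-detected", payload="wall at 0.3 m"))
robot.step()
```

In an agent-oriented language, this dispatch machinery is built into the language itself as event-handler declarations rather than programmed explicitly, which is the abstraction the paper argues simplifies robot programming.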
Related papers
- TalkWithMachines: Enhancing Human-Robot Interaction for Interpretable Industrial Robotics Through Large/Vision Language Models [1.534667887016089]
The presented paper investigates recent advancements in Large Language Models (LLMs) and Vision Language Models (VLMs), and their integration into robotic systems.
This integration allows robots to understand and execute commands given in natural language and to perceive their environment through visual and/or descriptive inputs.
Our paper outlines four LLM-assisted simulated robotic control experiments, which explore (i) low-level control, (ii) the generation of language-based feedback that describes the robot's internal states, (iii) the use of visual information as additional input, and (iv) the use of robot structure information for generating task plans and feedback.
arXiv Detail & Related papers (2024-12-19T23:43:40Z) - PROSKILL: A formal skill language for acting in robotics [0.0]
Acting is an important decisional function for autonomous robots.
We propose a new language to program the acting skills.
This language maps unequivocally into a formal model which can be used to check properties offline or execute the skills.
arXiv Detail & Related papers (2024-03-12T15:56:53Z) - RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks described in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z) - Exploring Large Language Models to Facilitate Variable Autonomy for Human-Robot Teaming [4.779196219827508]
We introduce a novel framework for a GPT-powered multi-robot testbed environment, based on a Unity Virtual Reality (VR) setting.
This system allows users to interact with robot agents through natural language, each powered by individual GPT cores.
A user study with 12 participants explores the effectiveness of GPT-4 and, more importantly, user strategies when given the opportunity to converse in natural language within a multi-robot environment.
arXiv Detail & Related papers (2023-12-12T12:26:48Z) - Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z) - RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control [140.48218261864153]
We study how vision-language models trained on Internet-scale data can be incorporated directly into end-to-end robotic control.
Our approach leads to performant robotic policies and enables RT-2 to obtain a range of emergent capabilities from Internet-scale training.
arXiv Detail & Related papers (2023-07-28T21:18:02Z) - Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model [63.66204449776262]
Instruct2Act is a framework that maps multi-modal instructions to sequential actions for robotic manipulation tasks.
Our approach is adjustable and flexible in accommodating various instruction modalities and input types.
Our zero-shot method outperformed many state-of-the-art learning-based policies in several tasks.
arXiv Detail & Related papers (2023-05-18T17:59:49Z) - ChatGPT for Robotics: Design Principles and Model Abilities [25.032064314822243]
We outline a strategy that combines design principles for prompt engineering and the creation of a high-level function library.
We focus our evaluations on the effectiveness of different prompt engineering techniques and dialog strategies towards the execution of various types of robotics tasks.
Our study encompasses a range of tasks within the robotics domain, from basic logical, geometrical, and mathematical reasoning all the way to complex domains such as aerial navigation, manipulation, and embodied agents.
arXiv Detail & Related papers (2023-02-20T06:39:06Z) - Learning Action-Effect Dynamics for Hypothetical Vision-Language Reasoning Task [50.72283841720014]
We propose a novel learning strategy that can improve reasoning about the effects of actions.
We demonstrate the effectiveness of our proposed approach and discuss its advantages over previous baselines in terms of performance, data efficiency, and generalization capability.
arXiv Detail & Related papers (2022-12-07T05:41:58Z) - ProgPrompt: Generating Situated Robot Task Plans using Large Language Models [68.57918965060787]
Large language models (LLMs) can be used to score potential next actions during task planning.
We present a programmatic LLM prompt structure that enables plan generation across situated environments.
arXiv Detail & Related papers (2022-09-22T20:29:49Z) - Translating Natural Language Instructions to Computer Programs for Robot Manipulation [0.6629765271909505]
We propose translating the natural language instruction to a Python function that queries the scene by accessing the output of the object detector; a sketch of this idea appears below.
We show that the proposed method performs better than training a neural network to directly predict the robot actions.
arXiv Detail & Related papers (2020-12-26T07:57:55Z)
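
The last entry above proposes generating a Python function that inspects object-detector output rather than predicting robot actions directly. The following is a minimal sketch of that idea; the Detection type and the hand-written find_target function (standing in for a generated one, for a hypothetical instruction like "pick up the red block left of the bowl") are illustrative assumptions, not the paper's actual interface.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """One object reported by a (hypothetical) object detector."""
    label: str                               # object class, e.g. "bowl"
    box: tuple[float, float, float, float]   # (x1, y1, x2, y2) in pixels

    @property
    def center(self) -> tuple[float, float]:
        x1, y1, x2, y2 = self.box
        return ((x1 + x2) / 2, (y1 + y2) / 2)


# What a model might generate for "pick up the red block left of the bowl";
# written by hand here for illustration.
def find_target(detections: list[Detection]) -> Detection | None:
    bowls = [d for d in detections if d.label == "bowl"]
    blocks = [d for d in detections if d.label == "red block"]
    if not bowls or not blocks:
        return None                          # scene does not match the instruction
    bowl_x = bowls[0].center[0]
    left_blocks = [b for b in blocks if b.center[0] < bowl_x]
    return left_blocks[0] if left_blocks else None


# Hypothetical detector output for one camera frame.
scene = [
    Detection("bowl", (200, 100, 260, 160)),
    Detection("red block", (80, 120, 110, 150)),
]
target = find_target(scene)
print(target.center if target else "no match")
```

The design choice the entry highlights is that the generated program, not a learned policy, carries the grounding logic, which the summary reports works better than directly predicting robot actions with a neural network.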