Reducing Latency in LLM-Based Natural Language Commands Processing for Robot Navigation
- URL: http://arxiv.org/abs/2506.00075v1
- Date: Thu, 29 May 2025 21:16:14 GMT
- Title: Reducing Latency in LLM-Based Natural Language Commands Processing for Robot Navigation
- Authors: Diego Pollini, Bruna V. Guterres, Rodrigo S. Guerra, Ricardo B. Grando
- Abstract summary: This study explores the integration of the ChatGPT natural language model with the Robot Operating System 2 (ROS 2) to mitigate interaction latency. We present an architecture that integrates these technologies without requiring a middleware transport platform. Experimental results demonstrate that this integration improves the execution speed, usability, and accessibility of human-robot interaction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The integration of Large Language Models (LLMs), such as GPT, in industrial robotics enhances operational efficiency and human-robot collaboration. However, the computational complexity and size of these models often introduce latency in request and response times. This study explores the integration of the ChatGPT natural language model with the Robot Operating System 2 (ROS 2) to mitigate interaction latency and improve robotic system control within a simulated Gazebo environment. We present an architecture that integrates these technologies without requiring a middleware transport platform, detailing how a simulated mobile robot responds to text and voice commands. Experimental results demonstrate that this integration improves the execution speed, usability, and accessibility of human-robot interaction by decreasing communication latency by 7.01% on average. Such improvements facilitate smoother, real-time robot operations, which are crucial for industrial automation and precision tasks.
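The architecture described above translates natural language commands into robot motion. A minimal sketch of the translation step that would sit between the LLM and ROS 2 is shown below; the JSON action schema, the velocity values, and the fallback behavior are illustrative assumptions, not the paper's actual protocol.

```python
import json

# Hypothetical mapping from discrete actions (as an LLM might emit them)
# to ROS 2 Twist-style velocity fields. The action names and velocity
# magnitudes here are assumptions for illustration only.
ALLOWED_ACTIONS = {
    "forward":  {"linear_x": 0.5, "angular_z": 0.0},
    "backward": {"linear_x": -0.5, "angular_z": 0.0},
    "left":     {"linear_x": 0.0, "angular_z": 0.5},
    "right":    {"linear_x": 0.0, "angular_z": -0.5},
    "stop":     {"linear_x": 0.0, "angular_z": 0.0},
}

def reply_to_twist(llm_reply: str) -> dict:
    """Map an LLM reply like '{"action": "forward"}' to velocity fields.

    Unknown or malformed replies fall back to a safe stop command, so a
    hallucinated action never produces unintended motion.
    """
    try:
        action = json.loads(llm_reply).get("action", "stop")
    except json.JSONDecodeError:
        action = "stop"
    return ALLOWED_ACTIONS.get(action, ALLOWED_ACTIONS["stop"])

# A real node would publish these fields on a topic such as /cmd_vel via
# rclpy; here we show only the LLM-to-command translation step.
print(reply_to_twist('{"action": "left"}'))  # {'linear_x': 0.0, 'angular_z': 0.5}
```

Constraining the LLM to a small closed action vocabulary like this keeps the per-command parsing cost negligible, so end-to-end latency is dominated by the model's response time rather than by the ROS 2 side.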
Related papers
- One For All: LLM-based Heterogeneous Mission Planning in Precision Agriculture [2.9440788521375585]
We present a natural language (NL) robotic mission planner that enables non-specialists to control heterogeneous robots. Our architecture seamlessly translates human language into intermediate descriptions that can be executed by different robotic platforms. This work represents a significant step toward making robotic automation in precision agriculture more accessible to non-technical users.
arXiv Detail & Related papers (2025-06-11T18:45:44Z) - Robot-R1: Reinforcement Learning for Enhanced Embodied Reasoning in Robotics [55.05920313034645]
We introduce Robot-R1, a novel framework that leverages reinforcement learning to enhance embodied reasoning specifically for robot control. Inspired by the DeepSeek-R1 learning approach, Robot-R1 samples reasoning-based responses and reinforces those that lead to more accurate predictions. Our experiments show that models trained with Robot-R1 outperform SFT methods on embodied reasoning tasks.
arXiv Detail & Related papers (2025-05-29T16:41:12Z) - Fault-Tolerant Multi-Robot Coordination with Limited Sensing within Confined Environments [0.6144680854063939]
We propose a novel fault-tolerance technique leveraging physical contact interactions in multi-robot systems. We introduce the "Active Contact Response" (ACR) method, where each robot modulates its behavior based on the likelihood of encountering an inoperative (faulty) robot.
arXiv Detail & Related papers (2025-05-21T02:43:36Z) - Taccel: Scaling Up Vision-based Tactile Robotics via High-performance GPU Simulation [50.34179054785646]
We present Taccel, a high-performance simulation platform that integrates IPC and ABD to model robots, tactile sensors, and objects with both accuracy and unprecedented speed. Taccel provides precise physics simulation and realistic tactile signals while supporting flexible robot-sensor configurations through user-friendly APIs. These capabilities position Taccel as a powerful tool for scaling up tactile robotics research and development.
arXiv Detail & Related papers (2025-04-17T12:57:11Z) - HACTS: a Human-As-Copilot Teleoperation System for Robot Learning [47.9126187195398]
We introduce HACTS (Human-As-Copilot Teleoperation System), a novel system that establishes bilateral, real-time joint synchronization between a robot arm and teleoperation hardware. This simple yet effective feedback mechanism, akin to a steering wheel in autonomous vehicles, enables the human copilot to intervene seamlessly while collecting action-correction data for future learning.
arXiv Detail & Related papers (2025-03-31T13:28:13Z) - Force-Based Robotic Imitation Learning: A Two-Phase Approach for Construction Assembly Tasks [2.6092377907704254]
This paper proposes a two-phase system to improve robot learning. The first phase captures real-time data from operators using a robot arm linked with a virtual simulator via ROS-Sharp. In the second phase, this feedback is converted into robotic motion instructions, using a generative approach to incorporate force feedback into the learning process.
arXiv Detail & Related papers (2025-01-24T22:01:23Z) - RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RoboScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z) - Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z) - Efficient Causal Discovery for Robotics Applications [2.1244188321694146]
We present a practical demonstration of our approach for fast and accurate causal analysis, known as Filtered PCMCI (F-PCMCI).
The provided application illustrates how our F-PCMCI can accurately and promptly reconstruct the causal model of a human-robot interaction scenario.
arXiv Detail & Related papers (2023-10-23T13:30:07Z) - WALL-E: Embodied Robotic WAiter Load Lifting with Large Language Model [92.90127398282209]
This paper investigates the potential of integrating the most recent Large Language Models (LLMs) and existing visual grounding and robotic grasping system.
We introduce WALL-E (Embodied Robotic WAiter load lifting with Large Language model) as an example of this integration.
We deploy this LLM-empowered system on the physical robot to provide a more user-friendly interface for the instruction-guided grasping task.
arXiv Detail & Related papers (2023-08-30T11:35:21Z) - Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
In most prior work, planning motions that take human comfort into account is not part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.