Large Language Models for Human-Robot Interaction: Opportunities and Risks
- URL: http://arxiv.org/abs/2405.00693v1
- Date: Tue, 26 Mar 2024 15:36:40 GMT
- Title: Large Language Models for Human-Robot Interaction: Opportunities and Risks
- Authors: Jesse Atuhurra
- Abstract summary: We present a meta-study about the potential of large language models if deployed in social robots.
We study how these language models could be safely trained to ``understand'' societal norms and issues.
We hope this study provides a resourceful guide to other robotics researchers interested in incorporating language models in their robots.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The tremendous development in large language models (LLMs) has led to a new wave of innovations and applications and yielded research results that were initially forecast to take much longer to achieve. In this work, we tap into these recent developments and present a meta-study about the potential of large language models if deployed in social robots. We place particular emphasis on the applications of social robots: education, healthcare, and entertainment. We also study how these language models could be safely trained, before deployment in social robots, to ``understand'' societal norms and issues such as trust, bias, ethics, cognition, and teamwork. We hope this study provides a resourceful guide to other robotics researchers interested in incorporating language models in their robots.
Related papers
- Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis [73.89558418030418]
Most existing robotic systems have been designed for specific tasks, trained on specific datasets, and deployed within specific environments.
Motivated by the impressive open-set performance and content generation capabilities of web-scale, large-capacity pre-trained models, we devote this survey to exploring how foundation models can be applied to robotics.
arXiv Detail & Related papers (2023-12-14T10:02:55Z)
- Exploring Large Language Models to Facilitate Variable Autonomy for Human-Robot Teaming [4.779196219827508]
We introduce a novel framework for a GPT-powered multi-robot testbed environment, based on a Unity Virtual Reality (VR) setting.
This system allows users to interact with robot agents through natural language, each powered by individual GPT cores.
A user study with 12 participants explores the effectiveness of GPT-4 and, more importantly, user strategies when given the opportunity to converse in natural language within a multi-robot environment.
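The per-agent design described above, one conversational LLM core per robot, can be sketched minimally. The sketch below is illustrative only: `query_llm` is a hypothetical stand-in for whatever GPT-4 API the testbed uses, and the robot names and keyword routing are invented for the example, not the authors' implementation.
```python
# Sketch of a multi-robot setup in which each robot agent keeps its own
# LLM "core" (an independent conversation history), as the paper
# describes. `query_llm` is a hypothetical stand-in for a GPT-4 API call;
# the robot names and keyword routing are invented for the example.

def query_llm(messages: list[dict]) -> str:
    """Hypothetical LLM call; swap in a real chat-completion client."""
    raise NotImplementedError

class RobotAgent:
    def __init__(self, name: str, role_prompt: str):
        self.name = name
        # Each agent owns its own message history, i.e. its "GPT core".
        self.history = [{"role": "system", "content": role_prompt}]

    def converse(self, user_utterance: str) -> str:
        self.history.append({"role": "user", "content": user_utterance})
        reply = query_llm(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

robots = {
    "scout": RobotAgent("scout", "You control a scouting robot in a VR scene."),
    "carrier": RobotAgent("carrier", "You control a transport robot in a VR scene."),
}

def dispatch(utterance: str) -> str:
    # Naive routing: address the first robot whose name appears.
    for name, robot in robots.items():
        if name in utterance.lower():
            return robot.converse(utterance)
    return "No robot addressed."
```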
arXiv Detail & Related papers (2023-12-12T12:26:48Z)
- Large Language Models for Robotics: A Survey [40.76581696885846]
Large language models (LLMs) possess the ability to process and generate natural language, facilitating efficient interaction and collaboration with robots.
This review aims to summarize the applications of LLMs in robotics, delving into their impact and contributions to key areas such as robot control, perception, decision-making, and path planning.
arXiv Detail & Related papers (2023-11-13T10:46:35Z)
- Developing Social Robots with Empathetic Non-Verbal Cues Using Large Language Models [2.5489046505746704]
We design and label four types of empathetic non-verbal cues, abbreviated as SAFE: Speech, Action (gesture), Facial expression, and Emotion, in a social robot.
Preliminary results show distinct patterns in the robot's responses, such as a preference for calm and positive social emotions like 'joy' and 'lively', and frequent nodding gestures.
Our work lays the groundwork for future studies on human-robot interactions, emphasizing the essential role of both verbal and non-verbal cues in creating social and empathetic robots.
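The SAFE annotation scheme suggests a simple structured representation of each robot response. A minimal sketch follows; the pipe-delimited format and the example cue values are assumptions for illustration, not the paper's label set or output format.
```python
# Sketch of the SAFE cue scheme: Speech, Action (gesture), Facial
# expression, Emotion. Field values and the pipe-delimited format are
# illustrative, not the paper's label set or output format.
from dataclasses import dataclass

@dataclass
class SafeCue:
    speech: str             # what the robot says
    action: str             # gesture, e.g. "nod"
    facial_expression: str  # e.g. "smile"
    emotion: str            # social emotion label, e.g. "joy"

def parse_safe(raw: str) -> SafeCue:
    """Parse a 'speech|action|face|emotion' line from an LLM reply."""
    speech, action, face, emotion = (part.strip() for part in raw.split("|"))
    return SafeCue(speech, action, face, emotion)

print(parse_safe("Happy to help!|nod|smile|joy"))
```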
arXiv Detail & Related papers (2023-08-31T08:20:04Z)
- WALL-E: Embodied Robotic WAiter Load Lifting with Large Language Model [92.90127398282209]
This paper investigates the potential of integrating the most recent Large Language Models (LLMs) with an existing visual grounding and robotic grasping system.
We introduce WALL-E (Embodied Robotic WAiter load lifting with Large Language model) as an example of this integration.
We deploy this LLM-empowered system on the physical robot to provide a more user-friendly interface for the instruction-guided grasping task.
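The integration reads as a three-stage pipeline: the LLM extracts the requested object from the user's instruction, the visual grounding model localizes it, and the existing grasping system acts on the localized region. A minimal sketch under that assumption, with every function a hypothetical placeholder rather than WALL-E's actual API:
```python
# Hedged sketch of an LLM + visual grounding + grasping pipeline in the
# spirit of WALL-E. Every function is a hypothetical placeholder, not
# the authors' API.

def llm_extract_target(user_request: str) -> str:
    """Ask an LLM which object the user wants, e.g. 'the red cup'."""
    raise NotImplementedError

def ground_object(image, object_phrase: str):
    """Visual grounding: return an image region for the phrase."""
    raise NotImplementedError

def grasp(region) -> bool:
    """Hand the localized region to the existing grasping system."""
    raise NotImplementedError

def serve(user_request: str, image) -> bool:
    target = llm_extract_target(user_request)  # language -> object phrase
    region = ground_object(image, target)      # phrase -> image region
    return grasp(region)                       # region -> grasp attempt
```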
arXiv Detail & Related papers (2023-08-30T11:35:21Z)
- The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling Probabilistic Social Inferences from Linguistic Inputs [50.32802502923367]
We study how language drives and influences social reasoning in a probabilistic goal-inference domain.
We propose a neuro-symbolic model that carries out goal inference from linguistic inputs of agent scenarios.
Our model closely matches human response patterns and better predicts human judgements than using an LLM alone.
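The inverse-planning core of such a model is Bayes' rule over candidate goals, P(goal | actions) ∝ P(actions | goal) · P(goal). A toy enumeration, with made-up goals, prior, and likelihood values standing in for the paper's LLM-translated scenarios:
```python
# Toy Bayesian goal inference by enumeration, illustrating the inverse
# planning at the heart of models like NIPE. Goals, prior, and
# likelihood values are made up for the example.

prior = {"get_coffee": 0.5, "get_tea": 0.5}
# P(observed action | goal) for a single observed action, e.g. the
# agent walks toward the coffee machine.
likelihood = {"get_coffee": 0.8, "get_tea": 0.2}

def posterior(prior, likelihood):
    unnormalized = {g: prior[g] * likelihood[g] for g in prior}
    z = sum(unnormalized.values())
    return {g: p / z for g, p in unnormalized.items()}

print(posterior(prior, likelihood))  # roughly {'get_coffee': 0.8, 'get_tea': 0.2}
```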
arXiv Detail & Related papers (2023-06-25T19:38:01Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, Manipulation of Open-World Objects (MOO), which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
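One way to read the summarized recipe: the pre-trained vision-language model extracts object-identifying information from the instruction's object phrase (and the image), and the manipulation policy is conditioned on it. A structural sketch under that reading, with all functions as hypothetical placeholders:
```python
# Structural sketch, under one reading of the summary above: a
# pre-trained vision-language model extracts object-identifying
# information from the instruction's object phrase, and the policy is
# conditioned on it. All functions are hypothetical placeholders.

def vlm_object_info(image, object_phrase: str):
    """Pre-trained VLM extracts object-identifying information,
    e.g. an embedding or an image location for the phrase."""
    raise NotImplementedError

def policy_step(image, object_info):
    """Manipulation policy maps (image, object info) to a robot action."""
    raise NotImplementedError

def act(image, instruction_object: str):
    info = vlm_object_info(image, instruction_object)
    return policy_step(image, info)  # condition control on the object info
```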
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- World Models and Predictive Coding for Cognitive and Developmental Robotics: Frontiers and Challenges [51.92834011423463]
We focus on the two concepts of world models and predictive coding.
In neuroscience, predictive coding proposes that the brain continuously predicts its inputs and adapts to model its own dynamics and control behavior in its environment.
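That loop, predict the input, measure the error, update the internal model to shrink it, can be shown in its simplest scalar form. A toy sketch with an illustrative learning rate and a trivial generative model (the estimate predicts itself):
```python
# Toy scalar predictive coding loop: the internal estimate mu predicts
# the sensory input, and each update shrinks the prediction error.
# Observations and the learning rate are illustrative.
mu = 0.0   # internal estimate of the latent cause
lr = 0.1   # illustrative learning rate

for observation in [1.0, 1.0, 0.9, 1.1, 1.0]:
    prediction = mu                   # trivial generative model
    error = observation - prediction  # prediction error
    mu += lr * error                  # step that reduces the error
print(round(mu, 3))  # estimate has moved toward the observed inputs
```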
arXiv Detail & Related papers (2023-01-14T06:38:14Z)
- Understanding Natural Language in Context [13.112390442564442]
We focus on cognitive robots, which have some knowledge-based models of the world and operate by reasoning and planning with this model.
Our goal in this research is to translate natural language utterances into this robot's formalism.
We do so by combining off-the-shelf state-of-the-art language models, planning tools, and the robot's knowledge base for better communication.
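That combination suggests a three-stage pipeline: LLM translation into the robot's formalism, validation against the knowledge base, and handoff to a planner. A minimal sketch in which the PDDL-style goal syntax and all helpers are illustrative assumptions, not the paper's implementation:
```python
# Sketch of translating a natural-language utterance into a robot's
# planning formalism, then validating and planning. The PDDL-style goal
# syntax and all helpers are illustrative assumptions.

def llm_translate(utterance: str) -> str:
    """Prompt an LLM to emit a goal in the robot's formalism,
    e.g. "bring me the cup" -> "(at cup table1)"."""
    raise NotImplementedError

def validate(goal: str, knowledge_base: set[str]) -> bool:
    """Reject goals that mention symbols the robot's model lacks."""
    symbols = goal.strip("()").split()
    return all(s in knowledge_base for s in symbols)

def plan(goal: str) -> list[str]:
    """Hand a validated goal to an off-the-shelf planner."""
    raise NotImplementedError

def handle(utterance: str, knowledge_base: set[str]) -> list[str]:
    goal = llm_translate(utterance)
    if not validate(goal, knowledge_base):
        raise ValueError(f"goal {goal!r} uses unknown symbols")
    return plan(goal)
```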
arXiv Detail & Related papers (2022-05-25T11:52:16Z)
- Semantics for Robotic Mapping, Perception and Interaction: A Survey [93.93587844202534]
The study of understanding, known as semantics, dictates what the world "means" to a robot.
With humans and robots increasingly operating in the same world, the prospects of human-robot interaction also bring semantics into the picture.
Driven by need, as well as by enablers like increasing availability of training data and computational resources, semantics is a rapidly growing research area in robotics.
arXiv Detail & Related papers (2021-01-02T12:34:39Z)
- Spoken Language Interaction with Robots: Research Issues and Recommendations, Report from the NSF Future Directions Workshop [0.819605661841562]
Meeting human needs requires addressing new challenges in speech technology and user experience design.
More powerful adaptation methods are needed, without extensive re-engineering or the collection of massive training data.
Since robots operate in real time, their speech and language processing components must do so as well.
arXiv Detail & Related papers (2020-11-11T03:45:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.