DEVI: Open-source Human-Robot Interface for Interactive Receptionist
Systems
- URL: http://arxiv.org/abs/2101.00479v1
- Date: Sat, 2 Jan 2021 17:08:20 GMT
- Title: DEVI: Open-source Human-Robot Interface for Interactive Receptionist
Systems
- Authors: Ramesha Karunasena, Piumi Sandarenu, Madushi Pinto, Achala Athukorala,
Ranga Rodrigo, Peshala Jayasekara
- Abstract summary: "DEVI" is an open-source robot receptionist intelligence core.
This paper presents details on a prototype implementation of a physical robot using DEVI.
Experiments conducted with DEVI show the effectiveness of the proposed system.
- Score: 0.8972186395640678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humanoid robots that act as human-robot interfaces equipped with social
skills can assist people in many of their daily activities. Receptionist robots
are one such application where social skills and appearance are of utmost
importance. Many existing robot receptionist systems suffer from high cost,
and they do not disclose their internal architectures for further development
by robot researchers. Moreover, no customizable open-source robot receptionist
framework exists that can be deployed for any given application. In this paper
we present an open-source robot receptionist intelligence core -- "DEVI"
(meaning 'lady' in Sinhala) -- that allows researchers to easily create
customized robot receptionists according to their requirements (cost, external
appearance, and required processing power). This paper also presents details
on a prototype implementation of a physical robot using the DEVI system. The
robot can give directional guidance with physical gestures, answer basic
queries using a speech recognition and synthesis system, recognize and greet
known people using face recognition, and register new people in its database
using a self-learning neural network. Experiments conducted with DEVI show the
effectiveness of the proposed system.
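The greet-or-register behavior described in the abstract can be illustrated
with a short sketch. The following is a minimal, hypothetical example (not the
actual DEVI implementation) that assumes the open-source `face_recognition`
library and a simple in-memory database: a visitor's face embedding is
compared against the registered embeddings, a close match triggers a greeting,
and an unknown face is enrolled so that it is recognized on the next visit,
mirroring the self-learning registration loop the abstract describes.

```python
# Minimal sketch of a "greet or register" loop for a receptionist robot.
# Hypothetical illustration only -- not the actual DEVI codebase. Assumes
# the open-source `face_recognition` library (dlib-based 128-d embeddings)
# and an in-memory database of known people.
import face_recognition

known_encodings = []  # one 128-d embedding per registered person
known_names = []      # names, parallel to known_encodings

def greet_or_register(image_path, new_name=None, tolerance=0.6):
    """Greet a known visitor, or enroll an unknown one."""
    image = face_recognition.load_image_file(image_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return "No face detected."
    encoding = encodings[0]  # take the first detected face

    if known_encodings:
        # Distance from the new embedding to every registered embedding.
        distances = face_recognition.face_distance(known_encodings, encoding)
        best = int(distances.argmin())
        if distances[best] <= tolerance:
            return f"Hello again, {known_names[best]}!"

    # Unknown face: register it so the next visit is recognized.
    name = new_name or f"visitor_{len(known_names)}"
    known_encodings.append(encoding)
    known_names.append(name)
    return f"Nice to meet you, {name}. You are now registered."
```

In a deployed receptionist, the enrollment step would also capture the
visitor's spoken name through the speech recognition system, and the embedding
database would be persisted rather than held in memory.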
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks zero-shot after pre-training, follow language instructions from people, and acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Exploring Large Language Models to Facilitate Variable Autonomy for Human-Robot Teaming [4.779196219827508]
We introduce a novel framework for a GPT-powered multi-robot testbed environment, based on a Unity Virtual Reality (VR) setting.
This system allows users to interact with robot agents through natural language, each powered by individual GPT cores.
A user study with 12 participants explores the effectiveness of GPT-4 and, more importantly, user strategies when given the opportunity to converse in natural language within a multi-robot environment.
arXiv Detail & Related papers (2023-12-12T12:26:48Z)
- Knowledge-Driven Robot Program Synthesis from Human VR Demonstrations [16.321053835017942]
We present a system for automatically generating executable robot control programs from human task demonstrations in virtual reality (VR).
We leverage common-sense knowledge and game engine-based physics to semantically interpret human VR demonstrations.
We demonstrate our approach in the context of force-sensitive fetch-and-place for a robotic shopping assistant.
arXiv Detail & Related papers (2023-06-05T09:37:53Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Body Gesture Recognition to Control a Social Robot [5.557794184787908]
We propose a gesture-based language that allows humans to interact with robots using their bodies in a natural way.
We created a new neural-network gesture detection model, trained on a custom dataset of humans performing a set of body gestures.
arXiv Detail & Related papers (2022-06-15T13:49:22Z)
- HeRo 2.0: A Low-Cost Robot for Swarm Robotics Research [2.133433192530999]
This paper presents the design of a novel platform for swarm robotics applications that is low cost, easy to assemble using off-the-shelf components, and deeply integrated with the most widely used robotics framework, ROS (Robot Operating System).
The robotic platform is entirely open, composed of a 3D printed body and open-source software.
Results demonstrate that the proposed mobile robot is very effective given its small size and reduced cost, being suitable for swarm robotics research and education.
arXiv Detail & Related papers (2022-02-24T22:23:14Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been demonstrated to be suitable tools for addressing such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Embedded Computer Vision System Applied to a Four-Legged Line Follower Robot [0.0]
This project aims to drive a robot using an automated embedded computer vision system, connecting the robot's vision to its behavior.
The robot is applied to a typical mobile robot task: line following.
The decision of where to move next is based on the detected center of the line and is fully automated.
arXiv Detail & Related papers (2021-01-12T23:52:53Z)
- OpenBot: Turning Smartphones into Robots [95.94432031144716]
Current robots are either expensive or make significant compromises on sensory richness, computational power, and communication capabilities.
We propose to leverage smartphones to equip robots with extensive sensor suites, powerful computational abilities, state-of-the-art communication channels, and access to a thriving software ecosystem.
We design a small electric vehicle that costs $50 and serves as a robot body for standard Android smartphones.
arXiv Detail & Related papers (2020-08-24T18:04:50Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)