e-Inu: Simulating A Quadruped Robot With Emotional Sentience
- URL: http://arxiv.org/abs/2301.00964v1
- Date: Tue, 3 Jan 2023 06:28:45 GMT
- Title: e-Inu: Simulating A Quadruped Robot With Emotional Sentience
- Authors: Abhiruph Chakravarty, Jatin Karthik Tripathy, Sibi Chakkaravarthy S,
Aswani Kumar Cherukuri, S. Anitha, Firuz Kamalov, Annapurna Jonnalagadda
- Abstract summary: This paper discusses the understanding and virtual simulation of such a robot capable of detecting and understanding human emotions.
We use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions.
The video emotion detection system produced results that are almost on par with the state of the art, with an accuracy of 99.66%.
- Score: 4.15623340386296
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Quadruped robots are currently used in industrial robotics as mechanical aid
to automate several routine tasks. However, the use of such a robot in a
domestic setting is still largely an open research topic. This
paper discusses the understanding and virtual simulation of such a robot
capable of detecting and understanding human emotions, generating its gait, and
responding via sounds and expression on a screen. To this end, we use a
combination of reinforcement learning and software engineering concepts to
simulate a quadruped robot that can understand emotions, navigate through
various terrains and detect sound sources, and respond to emotions using
audio-visual feedback. This paper aims to establish the framework of simulating
a quadruped robot that is emotionally intelligent and can primarily respond to
audio-visual stimuli using motor or audio responses. Emotion detection from
speech was not as performant as ERANNs or Zeta Policy learning, but still
achieved an accuracy of 63.5%. The video emotion detection system produced
results almost on par with the state of the art, with an accuracy of
99.66%. Owing to its "on-policy" learning process, the PPO algorithm learned
extremely quickly, allowing the simulated dog to demonstrate a remarkably
seamless gait across the different cadences and variations. This
enabled the quadruped robot to respond to generated stimuli, allowing us to
conclude that it functions as predicted and satisfies the aim of this work.
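The abstract credits PPO's on-policy updates for the rapid gait learning but includes no code. Below is a minimal, hypothetical sketch of such a training loop using Stable-Baselines3's PPO on Gymnasium's Ant-v4 quadruped as a stand-in for the simulated dog; the environment choice and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch (not the authors' code): quadruped gait training with PPO.
# Assumes gymnasium (with MuJoCo extras) and stable-baselines3 are installed;
# Ant-v4 is a generic quadruped used here in place of the simulated dog.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Ant-v4")

# PPO is on-policy: each update consumes only fresh rollouts collected by the
# current policy (n_steps per update), which is the property the abstract
# cites for the fast gait convergence.
model = PPO(
    "MlpPolicy",
    env,
    n_steps=2048,        # rollout length gathered before each update
    batch_size=64,
    learning_rate=3e-4,  # illustrative default, not the paper's value
    verbose=1,
)
model.learn(total_timesteps=1_000_000)
model.save("quadruped_gait_ppo")

# Roll out the learned gait.
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```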
Related papers
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real
and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation on robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- A Human-Robot Mutual Learning System with Affect-Grounded Language Acquisition and Differential Outcomes Training [0.1812164955222814]
The paper presents a novel human-robot interaction setup for identifying robot homeostatic needs.
We adopted a differential outcomes training (DOT) protocol whereby the robot provides feedback specific to its internal needs.
We found evidence that DOT can enhance the human's learning efficiency, which in turn enables more efficient robot language acquisition.
arXiv Detail & Related papers (2023-10-20T09:41:31Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Knowledge-Driven Robot Program Synthesis from Human VR Demonstrations [16.321053835017942]
We present a system for automatically generating executable robot control programs from human task demonstrations in virtual reality (VR).
We leverage common-sense knowledge and game engine-based physics to semantically interpret human VR demonstrations.
We demonstrate our approach in the context of force-sensitive fetch-and-place for a robotic shopping assistant.
arXiv Detail & Related papers (2023-06-05T09:37:53Z)
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer to two different robotic platforms the same kinematic modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
- Robot Sound Interpretation: Learning Visual-Audio Representations for Voice-Controlled Robots [0.0]
We learn a representation that associates images and sound commands with minimal supervision.
Using this representation, we generate an intrinsic reward function to learn robotic tasks with reinforcement learning.
We show empirically that our method outperforms previous work across various sound types and robotic tasks.
arXiv Detail & Related papers (2021-09-07T02:26:54Z)
- URoboSim -- An Episodic Simulation Framework for Prospective Reasoning in Robotic Agents [18.869243389210492]
URoboSim is a robot simulator that allows robots to perform tasks as mental simulations before performing them in reality.
We show the capabilities of URoboSim in the form of mental simulations, data generation for machine learning, and use as a belief state for a real robot.
arXiv Detail & Related papers (2020-12-08T14:23:24Z)
- Self-supervised reinforcement learning for speaker localisation with the iCub humanoid robot [58.2026611111328]
Looking at a person's face is one of the mechanisms that humans rely on when it comes to filtering speech in noisy environments.
Having a robot that can look toward a speaker could benefit automatic speech recognition (ASR) performance in challenging environments.
We propose a self-supervised reinforcement learning-based framework inspired by the early development of humans.
arXiv Detail & Related papers (2020-11-12T18:02:15Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
- ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for Socially-Aware Robot Navigation [65.11858854040543]
We present ProxEmo, a novel end-to-end emotion prediction algorithm for robot navigation among pedestrians.
Our approach predicts the perceived emotions of a pedestrian from walking gaits, which is then used for emotion-guided navigation.
arXiv Detail & Related papers (2020-03-02T17:47:49Z)
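ProxEmo's planner is not reproduced in the abstract; the sketch below illustrates the general idea of emotion-guided proxemics with a toy cost function, where a predicted emotion class widens the comfort radius kept around a pedestrian. The emotion labels, radii, and function names are hypothetical illustrations, not the paper's learned parameters.

```python
# Hypothetical sketch of emotion-guided proxemic costing (not ProxEmo's code).
# A predicted emotion class widens or narrows the comfort radius that a
# navigation cost function maintains around each pedestrian.
import math

# Illustrative comfort radii in meters, one per emotion class; ProxEmo itself
# predicts emotion from walking gait and fuses multiple views.
COMFORT_RADIUS_M = {
    "happy": 0.8,
    "sad": 1.2,
    "angry": 1.8,
    "neutral": 1.0,
}

def proxemic_cost(robot_xy, pedestrian_xy, emotion: str) -> float:
    """Penalty added to a planner's cost map for approaching a pedestrian."""
    radius = COMFORT_RADIUS_M.get(emotion, 1.0)
    dist = math.dist(robot_xy, pedestrian_xy)
    if dist >= radius:
        return 0.0
    # Cost grows smoothly as the robot intrudes into the comfort zone.
    return (radius - dist) / radius

# Example: an "angry" pedestrian gets a wider berth than a "happy" one.
print(proxemic_cost((0.0, 0.0), (1.0, 0.0), "angry"))  # > 0, inside zone
print(proxemic_cost((0.0, 0.0), (1.0, 0.0), "happy"))  # 0.0, outside zone
```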