Motivating Physical Activity via Competitive Human-Robot Interaction
- URL: http://arxiv.org/abs/2202.07068v1
- Date: Mon, 14 Feb 2022 22:19:58 GMT
- Title: Motivating Physical Activity via Competitive Human-Robot Interaction
- Authors: Boling Yang, Golnaz Habibi, Patrick E. Lancaster, Byron Boots, Joshua R. Smith
- Abstract summary: This project aims to motivate research in competitive human-robot interaction by creating a robot competitor that can challenge human users in certain scenarios such as physical exercise and games.
We develop the robot competitor through iterative multi-agent reinforcement learning and show that it can perform well against human competitors.
- Score: 31.478167639618604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This project aims to motivate research in competitive human-robot interaction
by creating a robot competitor that can challenge human users in certain
scenarios such as physical exercise and games. With this goal in mind, we
introduce the Fencing Game, a human-robot competition used to evaluate both the
capabilities of the robot competitor and user experience. We develop the robot
competitor through iterative multi-agent reinforcement learning and show that
it can perform well against human competitors. Our user study additionally
found that our system was able to continuously create challenging and enjoyable
interactions that significantly increased human subjects' heart rates. The
majority of human subjects considered the system to be entertaining and
desirable for improving the quality of their exercise.
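The abstract's robot competitor is trained through iterative multi-agent reinforcement learning. As a rough illustration of that training pattern (not the authors' implementation), the Python sketch below alternates exact policy-gradient updates between two competing agents in a toy zero-sum matrix game, freezing one side while the other improves; the game, hyperparameters, and agent names are hypothetical stand-ins for the Fencing Game.

```python
# Minimal sketch of iterative multi-agent RL via alternating updates.
# The matrix game and all hyperparameters are illustrative placeholders,
# not the paper's Fencing Game or training code.
import numpy as np

rng = np.random.default_rng(0)

# Zero-sum payoff matrix for the row player ("attacker");
# the column player ("blocker") receives the negative payoff.
M = rng.normal(size=(4, 4))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def ascend(theta, values, lr=0.5):
    """One exact policy-gradient ascent step on expected payoff."""
    p = softmax(theta)
    u = p @ values                        # expected payoff under p
    return theta + lr * p * (values - u)  # gradient of u w.r.t. logits

theta_a = np.zeros(4)  # attacker logits
theta_b = np.zeros(4)  # blocker logits

for iteration in range(200):
    # Freeze the blocker and improve the attacker against it...
    p_b = softmax(theta_b)
    for _ in range(20):
        theta_a = ascend(theta_a, M @ p_b)
    # ...then freeze the attacker and improve the blocker.
    p_a = softmax(theta_a)
    for _ in range(20):
        theta_b = ascend(theta_b, -(M.T @ p_a))

print("attacker policy:", np.round(softmax(theta_a), 3))
print("blocker policy: ", np.round(softmax(theta_b), 3))
```

Each outer iteration trains one side against the other's frozen policy, which is the basic loop that "iterative multi-agent reinforcement learning" names; the actual system would replace the matrix game with the physical Fencing Game and the exact gradient with a deep RL algorithm.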
Related papers
- Human Reactions to Incorrect Answers from Robots [0.0]
The study systematically examines how human responses to robot failures shape trust dynamics and inform system design.
Results show that participants' trust in robotic technologies increased significantly when robots acknowledged their errors or limitations.
The study advances the science of human-robot interaction and promotes wider adoption of robotic technologies.
arXiv Detail & Related papers (2024-03-21T11:00:11Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies (a structural sketch follows this entry).
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
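HumanoidBench's finding favors hierarchy built on robust low-level policies. Purely as an illustrative skeleton (none of this is HumanoidBench code, and every name here is hypothetical), the sketch below shows the usual structure: a high-level policy emits a subgoal every K low-level steps, and a low-level controller, standing in for a learned policy, tracks it.

```python
# Hypothetical hierarchical-control skeleton (not HumanoidBench code):
# a high-level policy picks a subgoal every K steps; a low-level
# controller (a trivial proportional rule standing in for a learned,
# robust low-level policy) drives the state toward that subgoal.
import numpy as np

K = 10  # high-level decision interval, in low-level steps

def high_level_policy(state, task_goal):
    """Pick an intermediate subgoal partway toward the task goal."""
    return state + 0.3 * (task_goal - state)

def low_level_policy(state, subgoal, gain=0.5):
    """Stand-in for a learned low-level policy: proportional control."""
    return gain * (subgoal - state)

state = np.zeros(3)                    # toy 3-D "robot" state
task_goal = np.array([1.0, -2.0, 0.5])
subgoal = high_level_policy(state, task_goal)

for t in range(100):
    if t % K == 0:
        subgoal = high_level_policy(state, task_goal)
    state = state + low_level_policy(state, subgoal)  # trivial dynamics

print("final state:", np.round(state, 3), "task goal:", task_goal)
```

The division of labor is the point: the high level reasons about what to reach while the low level handles how, which is why weak low-level policies cap the whole hierarchy's performance.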
- Stimulate the Potential of Robots via Competition [60.69068909395984]
We propose a competitive learning framework that helps an individual robot acquire knowledge from the competition.
Specifically, competition information among competitors is introduced as an auxiliary signal for learning advantageous actions.
We further build a Multiagent-Race environment and conduct extensive experiments, demonstrating that robots trained in competitive environments outperform those trained with state-of-the-art algorithms in a single-robot environment (a sketch of the auxiliary-signal idea follows this entry).
arXiv Detail & Related papers (2024-03-15T17:21:39Z)
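The framework above treats competitor information as an auxiliary learning signal. One common way to realize that idea is reward shaping; the function below is a hypothetical sketch in that spirit, not the paper's actual formulation.

```python
# Hypothetical sketch: competition information as an auxiliary reward
# signal, in the spirit of the competitive learning framework above.
def shaped_reward(task_reward, own_progress, rival_progress, beta=0.1):
    """Task reward plus a bonus for outpacing the rival.

    `beta` scales the auxiliary competition signal; how the paper
    actually injects competitor information is not specified here.
    """
    return task_reward + beta * (own_progress - rival_progress)

# Example: the agent trails its rival, so the auxiliary term subtracts
# from an otherwise positive task reward.
print(shaped_reward(task_reward=1.0, own_progress=4.2, rival_progress=5.0))
```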
- CoGrasp: 6-DoF Grasp Generation for Human-Robot Collaboration [0.0]
We propose a novel, deep neural network-based method called CoGrasp that generates human-aware robot grasps.
In real robot experiments, our method achieves a success rate of about 88% in producing stable grasps.
Our approach enables a safe, natural, and socially aware co-grasping experience between humans and robots.
arXiv Detail & Related papers (2022-10-06T19:23:25Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the generated expressions were not perceived differently from the hand-designed ones in terms of anthropomorphism and animacy.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- A proxemics game between festival visitors and an industrial robot [1.2599533416395767]
In human-robot teams, the nonverbal behaviours of collaboration partners influence the experience of the human partners.
During the Ars Electronica 2020 Festival for Art, Technology and Society (Linz, Austria), we invited visitors to interact with an industrial robot.
We investigated general nonverbal behaviours of the humans interacting with the robot, as well as nonverbal behaviours of people in the audience.
arXiv Detail & Related papers (2021-05-28T13:26:00Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This study provides a novel learning mechanism based on the social impact of rivalry.
Building on the concept of competitive rivalry, our analysis investigates whether such rivalry changes how these agents are assessed from a human perspective.
arXiv Detail & Related papers (2020-11-02T21:54:18Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)