Interaction in Remote Peddling Using Avatar Robot by People with
Disabilities
- URL: http://arxiv.org/abs/2212.01030v1
- Date: Fri, 2 Dec 2022 08:55:51 GMT
- Title: Interaction in Remote Peddling Using Avatar Robot by People with
Disabilities
- Authors: Takashi Kanetsuna, Kazuaki Takeuchi, Hiroaki Kato, Taichi Sono,
Hirotaka Osawa, Kentaro Yoshifuji, Yoichi Yamazaki
- Abstract summary: We propose a mobile sales system using a mobile frozen drink machine and an avatar robot "OriHime", focusing on mobile customer service like peddling.
The effect of the system's peddling on customers is examined based on the results of video annotation.
- Score: 0.057725463942541105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Telework "avatar work," in which people with disabilities can engage in
physical work such as customer service, is being implemented in society. In
order to enable avatar work in a variety of occupations, we propose a mobile
sales system using a mobile frozen drink machine and an avatar robot "OriHime",
focusing on mobile customer service such as peddling. The effect of the system's
peddling on customers is examined based on the results of video annotation.
Related papers
- Zero-Cost Whole-Body Teleoperation for Mobile Manipulation [8.71539730969424]
MoMa-Teleop is a novel teleoperation method that delegates the base motions to a reinforcement learning agent.
We demonstrate that our approach results in a significant reduction in task completion time across a variety of robots and tasks.
arXiv Detail & Related papers (2024-09-23T15:09:45Z)
- Open-TeleVision: Teleoperation with Immersive Active Visual Feedback [17.505318269362512]
Open-TeleVision allows operators to actively perceive the robot's surroundings in a stereoscopic manner.
The system mirrors the operator's arm and hand movements on the robot, creating an immersive experience.
We validate the effectiveness of our system by collecting data and training imitation learning policies on four long-horizon, precise tasks.
arXiv Detail & Related papers (2024-07-01T17:55:35Z)
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
- What Matters to You? Towards Visual Representation Alignment for Robot Learning [81.30964736676103]
When operating in service of people, robots need to optimize rewards aligned with end-user preferences.
We propose Representation-Aligned Preference-based Learning (RAPL), a method for solving the visual representation alignment problem.
arXiv Detail & Related papers (2023-10-11T23:04:07Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- A ROS Architecture for Personalised HRI with a Bartender Social Robot [61.843727637976045]
The BRILLO project has the overall goal of creating an autonomous robotic bartender that can interact with customers while accomplishing its bartending tasks.
We present the developed three-layer ROS architecture, integrating a perception layer that manages the processing of different social signals, a decision-making layer for handling multi-party interactions, and an execution layer controlling the behaviour of a complex robot composed of arms and a face.
arXiv Detail & Related papers (2022-03-13T11:33:06Z)
- Scene Editing as Teleoperation: A Case Study in 6DoF Kit Assembly [18.563562557565483]
We propose the framework "Scene Editing as Teleoperation" (SEaT).
Instead of controlling the robot, users focus on specifying the task's goal.
A user can perform teleoperation without any expert knowledge of the robot hardware.
arXiv Detail & Related papers (2021-10-09T04:22:21Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Customized Handling of Unintended Interface Operation in Assistive Robots [7.657378889055477]
We present an assistance system that reasons about a human's intended actions during robot teleoperation in order to provide appropriate corrections for unintended behavior.
We model the human's physical interaction with a control interface during robot teleoperation and distinguish between intended and measured physical actions explicitly.
arXiv Detail & Related papers (2020-07-04T13:23:22Z)
- Avatar Work: Telework for Disabled People Unable to Go Outside by Using Avatar Robots "OriHime-D" and Its Verification [0.0]
We propose a telework "avatar work" that enables people with disabilities to engage in physical work such as customer service.
In avatar work, disabled people can remotely engage in physical work by operating the proposed robot "OriHime-D" with mouse or gaze input, depending on their disabilities.
arXiv Detail & Related papers (2020-03-25T12:44:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.