Avatar Work: Telework for Disabled People Unable to Go Outside by Using Avatar Robots "OriHime-D" and Its Verification
- URL: http://arxiv.org/abs/2003.12569v1
- Date: Wed, 25 Mar 2020 12:44:47 GMT
- Title: Avatar Work: Telework for Disabled People Unable to Go Outside by Using Avatar Robots "OriHime-D" and Its Verification
- Authors: Kazuaki Takeuchi, Yoichi Yamazaki, and Kentaro Yoshifuji
- Abstract summary: We propose "avatar work," a form of telework that enables people with disabilities to engage in physical work such as customer service.
In avatar work, disabled people can remotely engage in physical work by operating the proposed robot "OriHime-D" with a mouse or gaze input, depending on their own disabilities.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we propose "avatar work," a form of telework that enables people with
disabilities to engage in physical work such as customer service, in order to realize an
inclusive society in which we can do anything if we have a free mind, even while bedridden.
In avatar work, disabled people remotely engage in physical work by operating the proposed
robot "OriHime-D" with a mouse or gaze input, depending on their own disabilities. As a
social implementation initiative of avatar work, we opened a two-week limited avatar robot
cafe and evaluated remote employment by people with disabilities using OriHime-D. From the
results with 10 people with disabilities, we confirmed that the proposed avatar work leads
to mental fulfillment for people with disabilities and can be designed with an adaptable
workload. In addition, we confirmed that the work content of the experimental cafe is
appropriate for people with a variety of disabilities seeking social participation. This
study contributes to fulfillment throughout life and lifetime working, and at the same
time points toward a solution to the employment shortage problem.
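
The abstract describes a single teleoperation pipeline that operators drive either with a mouse or with gaze input, selected per operator. As a minimal illustrative sketch (not the actual OriHime-D software; the class names, the command set, and the dwell-based gaze selection below are all assumptions), the two modalities could be abstracted behind one command interface so the robot side is agnostic to the input device:

```python
# A minimal, hypothetical sketch of device-agnostic avatar teleoperation.
# Class names, commands, and the dwell-based gaze selection are assumptions
# for illustration; they are not the actual OriHime-D software.

import time
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Command:
    """A discrete teleoperation command sent to the avatar robot."""
    name: str                      # e.g. "move_forward", "raise_arm", "speak"
    payload: dict = field(default_factory=dict)


class InputMethod(ABC):
    """Common interface so the robot side is agnostic to the input device."""

    @abstractmethod
    def poll(self) -> Optional[Command]:
        """Return the next pending command, or None if the operator is idle."""


class MouseInput(InputMethod):
    """Maps on-screen button clicks directly to commands."""

    def __init__(self) -> None:
        self._queue: list[Command] = []

    def on_click(self, button_id: str) -> None:
        self._queue.append(Command(name=button_id))

    def poll(self) -> Optional[Command]:
        return self._queue.pop(0) if self._queue else None


class GazeInput(InputMethod):
    """Dwell-based selection: fixating a button for dwell_s seconds triggers it."""

    def __init__(self, dwell_s: float = 1.0) -> None:
        self.dwell_s = dwell_s
        self._target: Optional[str] = None
        self._since = 0.0
        self._queue: list[Command] = []

    def on_gaze(self, button_id: Optional[str]) -> None:
        now = time.monotonic()
        if button_id != self._target:
            self._target, self._since = button_id, now   # new fixation target
        elif button_id is not None and now - self._since >= self.dwell_s:
            self._queue.append(Command(name=button_id))
            self._target = None                          # require re-fixation

    def poll(self) -> Optional[Command]:
        return self._queue.pop(0) if self._queue else None
```

Under this split, assigning a worker MouseInput or GazeInput at session start would be a configuration choice rather than a code change, matching the paper's point that the input method is chosen depending on each operator's disabilities.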
Related papers
- Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z)
- Unveiling Technorelief: Enhancing Neurodiverse Collaboration with Media Capabilities [0.0]
The implications of collaboration on the cognitive, socio-affective experiences of autistic workers are poorly understood.
We ask how digital technologies alleviate autistic workers' experiences of their collaborative work environment.
The resulting "technorelief" enables autistic workers to tune into their perceptions and regain control of their collaborative experiences.
arXiv Detail & Related papers (2023-10-02T07:41:48Z)
- Multi-Agent Deep Reinforcement Learning for Dynamic Avatar Migration in AIoT-enabled Vehicular Metaverses with Trajectory Prediction [70.9337170201739]
We propose a model to predict the future trajectories of intelligent vehicles based on their historical data.
We show that our proposed algorithm can effectively reduce the latency of executing avatar tasks by around 25% compared to operating without trajectory prediction.
arXiv Detail & Related papers (2023-06-26T13:27:11Z)
- Design, Development, and Evaluation of an Interactive Personalized Social Robot to Monitor and Coach Post-Stroke Rehabilitation Exercises [68.37238218842089]
We develop an interactive social robot exercise coaching system for personalized rehabilitation.
This system integrates a neural network model with a rule-based model to automatically monitor and assess patients' rehabilitation exercises.
Our system can adapt to new participants and achieved an average performance of 0.81 in assessing their exercises, which is comparable to the experts' agreement level.
arXiv Detail & Related papers (2023-05-12T17:37:04Z)
- The Work Avatar Face-Off: Knowledge Worker Preferences for Realism in Meetings [0.0]
Our survey of 2509 knowledge workers from multiple countries rated five avatar styles for use by managers, known colleagues and unknown colleagues.
In all scenarios, participants favored higher realism, but fully realistic avatars were sometimes perceived as uncanny.
Less realistic avatars were rated worse when interacting with an unknown colleague or manager, as compared to a known colleague.
arXiv Detail & Related papers (2023-04-03T22:43:20Z)
- Interaction in Remote Peddling Using Avatar Robot by People with Disabilities [0.057725463942541105]
We propose a mobile sales system using a mobile frozen drink machine and an avatar robot "OriHime", focusing on mobile customer service like peddling.
The effects of the system's peddling on customers are examined based on the results of video annotation.
arXiv Detail & Related papers (2022-12-02T08:55:51Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Meta Avatar Robot Cafe: Linking Physical and Virtual Cybernetic Avatars to Provide Physical Augmentation for People with Disabilities [1.4017836211792967]
We create a place where people with disabilities who have difficulty going out can freely switch between their physical bodies and virtual bodies, and communicate their presence and warmth to each other.
arXiv Detail & Related papers (2022-07-18T09:58:07Z)
- A trained humanoid robot can perform human-like crossmodal social attention conflict resolution [13.059378830912912]
Our study adopted a neurorobotic paradigm of gaze-triggered audio-visual crossmodal integration to make an iCub robot express human-like social attention responses.
Masks were used to cover all facial visual cues other than the avatars' eyes.
We observed that the avatar's gaze could trigger crossmodal social attention with better human performance in the audio-visual congruent condition than in the incongruent condition.
arXiv Detail & Related papers (2021-11-02T21:49:52Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the generated explanations of our approach significantly improves the collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.