Accessible Robot Control in Mixed Reality
- URL: http://arxiv.org/abs/2306.02393v1
- Date: Sun, 4 Jun 2023 16:05:26 GMT
- Title: Accessible Robot Control in Mixed Reality
- Authors: Ganlin Zhang, Deheng Zhang, Longteng Duan, Guo Han
- Abstract summary: This method is mainly designed for people with physical disabilities.
The eye gaze tracking and head motion tracking technologies of HoloLens 2 are used to send control commands.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A novel method to control the Spot robot of Boston Dynamics with
HoloLens 2 is proposed. The method is designed primarily for people with physical
disabilities: users can control the robot's movement and the robot arm without
using their hands. The eye gaze tracking and head motion tracking technologies of
HoloLens 2 are used to send control commands. The movement of the robot follows
the user's eye gaze, and the robot arm mimics the pose of the user's head. In our
experiments, the method is comparable to traditional joystick control in both
time efficiency and user experience. A demo can be found on our project webpage:
https://zhangganlin.github.io/Holo-Spot-Page/index.html
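The abstract describes two control mappings: gaze direction steers the robot's base, and head pose drives the arm. Below is a minimal sketch of how such mappings could look, assuming a flat ground plane; all names (GazeSample, HeadPose, gaze_to_ground_target, and so on) are hypothetical stand-ins, not the authors' implementation and not the HoloLens or Spot SDK APIs.

```python
# Sketch of the two mappings from the abstract: gaze ray -> base velocity,
# head pose -> arm target. All types and function names are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class GazeSample:
    origin: tuple     # gaze ray origin in the robot's body frame (m)
    direction: tuple  # (approximately) unit gaze ray direction

@dataclass
class HeadPose:
    yaw: float        # head rotation about the vertical axis (rad)
    pitch: float      # head nod, positive looking down (rad)

def gaze_to_ground_target(gaze: GazeSample):
    """Intersect the gaze ray with the ground plane z = 0."""
    ox, oy, oz = gaze.origin
    dx, dy, dz = gaze.direction
    if dz >= 0:       # looking level or up: no ground intersection
        return None
    t = -oz / dz
    return (ox + t * dx, oy + t * dy)

def target_to_velocity(target, gain=0.5, v_max=0.6):
    """Proportional controller: walk toward the gazed-at point."""
    x, y = target
    v_x, v_y = gain * x, gain * y
    speed = math.hypot(v_x, v_y)
    if speed > v_max:  # clamp to a safe walking speed
        v_x, v_y = v_x * v_max / speed, v_y * v_max / speed
    return v_x, v_y

def head_to_arm_target(head: HeadPose, reach=0.8, z0=0.5):
    """Point the gripper where the head points (mimicry mapping)."""
    x = reach * math.cos(head.pitch) * math.cos(head.yaw)
    y = reach * math.cos(head.pitch) * math.sin(head.yaw)
    z = z0 - reach * math.sin(head.pitch)
    return (x, y, z)

# Example: user gazes slightly down-left from roughly head height.
gaze = GazeSample(origin=(0.0, 0.0, 1.5), direction=(0.8, 0.2, -0.57))
target = gaze_to_ground_target(gaze)
if target is not None:
    v_x, v_y = target_to_velocity(target)  # forward/left walking speeds
```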
Related papers
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot (a generic sketch of Jacobian-based closed-loop control appears after this list).
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
- Pedipulate: Enabling Manipulation Skills using a Quadruped Robot's Leg [11.129918951736052]
Legged robots have the potential to become vital in maintenance, home support, and exploration scenarios.
In this work, we explore pedipulation - using the legs of a legged robot for manipulation.
arXiv Detail & Related papers (2024-02-16T17:20:45Z) - ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion retargeting.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing works on human-to-robot similarity in both efficiency and precision.
arXiv Detail & Related papers (2023-09-11T08:55:04Z) - Seeing-Eye Quadruped Navigation with Force Responsive Locomotion Control [2.832383052276894]
Seeing-eye robots are useful tools for guiding visually impaired people, potentially producing a huge societal impact.
None of the prior approaches, however, considered external tugs from humans, which frequently occur in a real guide dog setting.
We demonstrate our full seeing-eye robot system on a real quadruped robot with a blindfolded human.
arXiv Detail & Related papers (2023-09-08T15:02:46Z) - Giving Robots a Hand: Learning Generalizable Manipulation with
Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z) - AR2-D2:Training a Robot Without a Robot [53.10633639596096]
We introduce AR2-D2, a system for collecting demonstrations which does not require people with specialized training.
AR2-D2 is a framework in the form of an iOS app that people can use to record a video of themselves manipulating any object.
We show that data collected via our system enables the training of behavior cloning agents in manipulating real objects.
arXiv Detail & Related papers (2023-06-23T23:54:26Z) - GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots [87.32145104894754]
We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
arXiv Detail & Related papers (2022-09-12T15:14:32Z) - Robots with Different Embodiments Can Express and Influence Carefulness
in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z) - Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans
on Youtube [24.530131506065164]
We build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand.
The robot observes the human operator via a single RGB camera and imitates their actions in real-time.
We leverage human video data to train a system that understands human hands and retargets a human video stream into a robot hand-arm trajectory that is smooth, swift, safe, and semantically similar to the guiding demonstration.
arXiv Detail & Related papers (2022-02-21T18:59:59Z)
- Morphology-Agnostic Visual Robotic Control [76.44045983428701]
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)
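For context on the closed-loop control idea mentioned in the Neural Jacobian Fields entry above, here is a generic sketch of Jacobian-based resolved-rate control with damped least squares. The toy two-link arm and its analytic Jacobian are stand-ins chosen for illustration; the paper itself learns the Jacobian from a single camera, which this sketch does not attempt.

```python
# Generic Jacobian-based closed-loop control (resolved-rate with damped
# least squares). The two-link arm below is a toy stand-in, not the
# learned Neural Jacobian Fields model.
import numpy as np

def closed_loop_step(q, x, x_des, jacobian, gain=1.0, damping=1e-2):
    """One step: dq = J^T (J J^T + lambda I)^-1 * gain * (x_des - x)."""
    J = jacobian(q)                              # task Jacobian at current state
    err = x_des - x                              # task-space error
    JJt = J @ J.T + damping * np.eye(J.shape[0])
    dq = J.T @ np.linalg.solve(JJt, gain * err)  # damped least-squares update
    return q + dq

# Toy example: a planar two-link arm (unit link lengths) tracked by its
# end-effector position.
def fk(q):
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def toy_jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

q = np.array([0.3, 0.4])
x_des = np.array([1.2, 0.8])
for _ in range(100):
    q = closed_loop_step(q, fk(q), x_des, toy_jacobian, gain=0.2)
print(fk(q))  # converges toward x_des
```

The damping term keeps the update bounded near singular configurations, which is the standard trade-off of damped least squares over a plain pseudoinverse.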
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.