Seeing-Eye Quadruped Navigation with Force Responsive Locomotion Control
- URL: http://arxiv.org/abs/2309.04370v2
- Date: Thu, 12 Oct 2023 17:48:22 GMT
- Title: Seeing-Eye Quadruped Navigation with Force Responsive Locomotion Control
- Authors: David DeFazio, Eisuke Hirota, Shiqi Zhang
- Abstract summary: Seeing-eye robots are useful tools for guiding visually impaired people, potentially producing a huge societal impact.
No prior seeing-eye robot system has considered external tugs from humans, which frequently occur in a real guide dog setting.
We demonstrate our full seeing-eye robot system on a real quadruped robot with a blindfolded human.
- Score: 2.832383052276894
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Seeing-eye robots are very useful tools for guiding visually impaired people,
potentially producing a huge societal impact given the low availability and
high cost of real guide dogs. Although a few seeing-eye robot systems have
already been demonstrated, none considered external tugs from humans, which
frequently occur in a real guide dog setting. In this paper, we simultaneously
train a locomotion controller that is robust to external tugging forces via
Reinforcement Learning (RL), and an external force estimator via supervised
learning. The controller ensures stable walking, and the force estimator
enables the robot to respond to the external forces from the human. These
forces are used to guide the robot to the global goal, which is unknown to the
robot, while the robot guides the human around nearby obstacles via a local
planner. Experimental results in simulation and on hardware show that our
controller is robust to external forces, and our seeing-eye system can
accurately detect force direction. We demonstrate our full seeing-eye robot
system on a real quadruped robot with a blindfolded human. The video can be
seen at our project page: https://bu-air-lab.github.io/guide_dog/
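To make the described pipeline concrete, the sketch below shows one way the learned components could fit together: a supervised force estimator that reads a short history of proprioceptive observations, and a small helper that turns the estimated tug into a heading command for the local planner. This is a minimal illustration, not the authors' implementation; the network architecture, observation dimensions, deadband threshold, and function names are all assumptions.

```python
import torch
import torch.nn as nn


class ForceEstimator(nn.Module):
    """Hypothetical supervised estimator: maps a short history of
    proprioceptive observations to the planar external force (f_x, f_y)
    applied at the robot's base. The paper only states that the estimator
    is trained with supervised learning; this architecture is illustrative."""

    def __init__(self, obs_dim=48, history_len=5, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim * history_len, hidden_dim),
            nn.ELU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ELU(),
            nn.Linear(hidden_dim, 2),  # planar force estimate (f_x, f_y)
        )

    def forward(self, obs_history):
        # Flatten the (batch, history, obs) window into a single feature vector.
        return self.net(obs_history.flatten(start_dim=1))


def force_to_heading(force_xy, deadband=5.0):
    """Turn an estimated tug into a coarse heading command for the local
    planner: ignore small forces, otherwise follow the pull direction.
    The deadband value is an assumption, not taken from the paper."""
    magnitude = torch.linalg.norm(force_xy)
    if magnitude < deadband:  # no clear tug from the handler
        return None
    return torch.atan2(force_xy[1], force_xy[0])  # desired heading (rad)


if __name__ == "__main__":
    estimator = ForceEstimator()
    obs_history = torch.randn(1, 5 * 48)  # stand-in proprioceptive history
    force = estimator(obs_history)[0]
    print("estimated force:", force, "heading:", force_to_heading(force))
```

Note that in the paper the RL locomotion controller and the force estimator are trained simultaneously; the sketch above only covers the estimation step and the conversion of a detected tug into a steering cue.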
Related papers
- LLM Granularity for On-the-Fly Robot Control [3.5015824313818578]
In circumstances where visuals become unreliable or unavailable, can we rely solely on language to control robots?
This work takes the initial steps to answer this question by: 1) evaluating the responses of assistive robots to language prompts of varying granularities; and 2) exploring the necessity and feasibility of controlling the robot on-the-fly.
arXiv Detail & Related papers (2024-06-20T18:17:48Z)
- Learning Visual Quadrupedal Loco-Manipulation from Demonstrations [36.1894630015056]
We aim to empower a quadruped robot to execute real-world manipulation tasks using only its legs.
We decompose the loco-manipulation process into a low-level reinforcement learning (RL)-based controller and a high-level Behavior Cloning (BC)-based planner.
Our approach is validated through simulations and real-world experiments, demonstrating the robot's ability to perform tasks that demand mobility and high precision.
arXiv Detail & Related papers (2024-03-29T17:59:05Z)
- Pedipulate: Enabling Manipulation Skills using a Quadruped Robot's Leg [11.129918951736052]
Legged robots have the potential to become vital in maintenance, home support, and exploration scenarios.
In this work, we explore pedipulation - using the legs of a legged robot for manipulation.
arXiv Detail & Related papers (2024-02-16T17:20:45Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- Accessible Robot Control in Mixed Reality [0.0]
This method is mainly designed for people with physical disabilities.
The eye gaze tracking and head motion tracking technologies of Hololens 2 are utilized for sending control commands.
arXiv Detail & Related papers (2023-06-04T16:05:26Z)
- Barkour: Benchmarking Animal-level Agility with Quadruped Robots [70.97471756305463]
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism.
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z)
- GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots [87.32145104894754]
We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
arXiv Detail & Related papers (2022-09-12T15:14:32Z)
- Fleet-DAgger: Interactive Robot Fleet Learning with Scalable Human Supervision [72.4735163268491]
Commercial and industrial deployments of robot fleets often fall back on remote human teleoperators during execution.
We formalize the Interactive Fleet Learning (IFL) setting, in which multiple robots interactively query and learn from multiple human supervisors.
We propose Fleet-DAgger, a family of IFL algorithms, and compare a novel Fleet-DAgger algorithm to 4 baselines in simulation.
arXiv Detail & Related papers (2022-06-29T01:23:57Z)
- Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on Youtube [24.530131506065164]
We build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand.
The robot observes the human operator via a single RGB camera and imitates their actions in real-time.
We leverage this data to train a system that understands human hands and retargets a human video stream into a robot hand-arm trajectory that is smooth, swift, safe, and semantically similar to the guiding demonstration.
arXiv Detail & Related papers (2022-02-21T18:59:59Z)
- OpenBot: Turning Smartphones into Robots [95.94432031144716]
Current robots are either expensive or make significant compromises on sensory richness, computational power, and communication capabilities.
We propose to leverage smartphones to equip robots with extensive sensor suites, powerful computational abilities, state-of-the-art communication channels, and access to a thriving software ecosystem.
We design a small electric vehicle that costs $50 and serves as a robot body for standard Android smartphones.
arXiv Detail & Related papers (2020-08-24T18:04:50Z)
- Morphology-Agnostic Visual Robotic Control [76.44045983428701]
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.