Learning to Control Complex Robots Using High-Dimensional Interfaces:
Preliminary Insights
- URL: http://arxiv.org/abs/2110.04663v1
- Date: Sat, 9 Oct 2021 23:38:22 GMT
- Title: Learning to Control Complex Robots Using High-Dimensional Interfaces:
Preliminary Insights
- Authors: Jongmin M. Lee, Temesgen Gebrekristos, Dalia De Santis, Mahdieh
Nejati-Javaremi, Deepak Gopinath, Biraj Parikh, Ferdinando A. Mussa-Ivaldi,
Brenna D. Argall
- Abstract summary: We explore the use of limited upper-body motions, captured via motion sensors, as inputs to control a 7 degree-of-freedom robotic arm.
It is possible that even dense sensor signals lack the salient information and independence necessary for reliable high-dimensional robot control.
- Score: 22.719193009150867
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Human body motions can be captured as a high-dimensional continuous signal
using motion sensor technologies. The resulting data can be surprisingly rich
in information, even when captured from persons with limited mobility. In this
work, we explore the use of limited upper-body motions, captured via motion
sensors, as inputs to control a 7 degree-of-freedom assistive robotic arm. It
is possible that even dense sensor signals lack the salient information and
independence necessary for reliable high-dimensional robot control. As the
human learns over time in the context of this limitation, intelligence on the
robot can be leveraged to better identify key learning challenges, provide
useful feedback, and support individuals until the challenges are managed. In
this short paper, we examine two uninjured participants' data from an ongoing
study, to extract preliminary results and share insights. We observe
opportunities for robot intelligence to step in, including the identification
of inconsistencies in time spent across all control dimensions, asymmetries in
individual control dimensions, and user progress in learning. Machine reasoning
about these situations may facilitate novel interface learning in the future.
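To make the setup above concrete: the abstract does not state how the sensor signals are mapped to the arm, but body-machine interfaces of this kind often use a linear dimensionality reduction such as PCA to turn high-dimensional motion signals into control commands. The sketch below assumes that mapping; the sensor count, deadband, and all names are illustrative. It also computes the per-dimension time-usage statistic that the abstract flags as an opportunity for robot intelligence.

```python
import numpy as np
from sklearn.decomposition import PCA

N_SENSORS = 16   # e.g., 4 IMUs x 4 channels; illustrative assumption
N_ROBOT_DOF = 7  # the 7 degree-of-freedom assistive arm from the abstract

# Calibration: record free upper-body motion and fit a linear map from
# the high-dimensional sensor space down to 7 control dimensions.
calib = np.random.randn(5000, N_SENSORS)          # stand-in for recorded data
pca = PCA(n_components=N_ROBOT_DOF).fit(calib)

def sensors_to_command(sample: np.ndarray) -> np.ndarray:
    """Project one sensor sample onto the 7 control dimensions."""
    return pca.transform(sample.reshape(1, -1))[0]

def dimension_usage(commands: np.ndarray, deadband: float = 0.05) -> np.ndarray:
    """Fraction of time each control dimension is actively driven.

    Large imbalances across the 7 entries are the kind of
    'inconsistency in time spent across control dimensions'
    the abstract describes.
    """
    active = np.abs(commands) > deadband
    return active.mean(axis=0)

session = pca.transform(np.random.randn(2000, N_SENSORS))  # stand-in session
print(dimension_usage(session))  # one usage fraction per control dimension
```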
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
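The entry above describes a human operator sharing end-effector control with a learned assistive agent. The paper's arbitration rule is not given here; a common baseline, assumed purely for illustration, is a linear blend of the two commands:

```python
import numpy as np

def blend_commands(human_cmd: np.ndarray,
                   agent_cmd: np.ndarray,
                   alpha: float = 0.5) -> np.ndarray:
    """Linear arbitration between human and assistive-agent commands.

    alpha = 1.0 gives the human full authority; alpha = 0.0 hands
    control to the agent. The paper's actual sharing scheme may
    differ; this is only a common shared-control baseline.
    """
    assert 0.0 <= alpha <= 1.0
    return alpha * human_cmd + (1.0 - alpha) * agent_cmd

# Example: 6-DOF end-effector velocity commands (illustrative values).
human = np.array([0.10, 0.00, -0.05, 0.0, 0.0, 0.0])
agent = np.array([0.08, 0.02, -0.04, 0.0, 0.0, 0.1])
print(blend_commands(human, agent, alpha=0.7))
```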
- Teaching Unknown Objects by Leveraging Human Gaze and Augmented Reality in Human-Robot Interaction [3.1473798197405953]
This dissertation aims to teach a robot unknown objects in the context of Human-Robot Interaction (HRI).
The combination of eye tracking and Augmented Reality enabled the human teacher to communicate with the robot effectively.
The robot's object detection capabilities exhibited comparable performance to state-of-the-art object detectors trained on extensive datasets.
arXiv Detail & Related papers (2023-12-12T11:34:43Z)
- Bridging Active Exploration and Uncertainty-Aware Deployment Using Probabilistic Ensemble Neural Network Dynamics [11.946807588018595]
This paper presents a unified model-based reinforcement learning framework that bridges active exploration and uncertainty-aware deployment.
The two opposing tasks of exploration and deployment are optimized through state-of-the-art sampling-based MPC.
We conduct experiments on both autonomous vehicles and wheeled robots, showing promising results for both exploration and deployment.
arXiv Detail & Related papers (2023-05-20T17:20:12Z)
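The framework in the entry above couples a probabilistic ensemble of dynamics models with sampling-based MPC. Its exact objective is not reproduced here; one standard pattern, sketched under that assumption, scores candidate actions with ensemble disagreement, rewarding it during exploration and penalizing it at deployment:

```python
import numpy as np

def ensemble_disagreement(predictions: np.ndarray) -> float:
    """Epistemic uncertainty from an ensemble of learned dynamics models.

    predictions: (n_models, state_dim) next-state predictions for one
    candidate action. Higher variance across models = more uncertainty.
    """
    return predictions.var(axis=0).sum()

def score_action(task_reward: float, disagreement: float,
                 mode: str = "deploy", beta: float = 1.0) -> float:
    # Exploration seeks out uncertainty; deployment avoids it.
    sign = 1.0 if mode == "explore" else -1.0
    return task_reward + sign * beta * disagreement

preds = np.random.randn(5, 4) * 0.1  # 5 models, 4-dim state; illustrative
print(score_action(task_reward=1.0,
                   disagreement=ensemble_disagreement(preds),
                   mode="explore"))
```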
- BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning [108.41464483878683]
We study the problem of enabling a vision-based robotic manipulation system to generalize to novel tasks.
We develop an interactive and flexible imitation learning system that can learn from both demonstrations and interventions.
When scaling data collection on a real robot to more than 100 distinct tasks, we find that this system can perform 24 unseen manipulation tasks with an average success rate of 44%.
arXiv Detail & Related papers (2022-02-04T07:30:48Z)
- Dual-Arm Adversarial Robot Learning [0.6091702876917281]
We propose dual-arm settings as platforms for robot learning.
We will discuss the potential benefits of this setup as well as the challenges and research directions that can be pursued.
arXiv Detail & Related papers (2021-10-15T12:51:57Z)
- What Matters in Learning from Offline Human Demonstrations for Robot Manipulation [64.43440450794495]
We conduct an extensive study of six offline learning algorithms for robot manipulation.
Our study analyzes the most critical challenges when learning from offline human data.
We highlight opportunities for learning from human datasets.
arXiv Detail & Related papers (2021-08-06T20:48:30Z)
- Low Dimensional State Representation Learning with Robotics Priors in Continuous Action Spaces [8.692025477306212]
Reinforcement learning algorithms have proven to be capable of solving complicated robotics tasks in an end-to-end fashion.
We propose a framework combining the learning of a low-dimensional state representation, from high-dimensional observations coming from the robot's raw sensory readings, with the learning of the optimal policy.
arXiv Detail & Related papers (2021-07-04T15:42:01Z)
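As a minimal illustration of the state-representation idea in the entry above (the paper's network architecture and robotics-prior losses are not reproduced here), a linear encoder stands in for the learned mapping from raw high-dimensional observations to the low-dimensional state the policy consumes:

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear encoder standing in for a learned state-representation
# network; the actual paper trains this with robotics-prior losses.
OBS_DIM, STATE_DIM = 128, 5  # illustrative dimensions
W = rng.normal(scale=0.01, size=(STATE_DIM, OBS_DIM))

def encode(obs: np.ndarray) -> np.ndarray:
    """Compress a raw sensory reading into a low-dimensional state."""
    return W @ obs

obs = rng.normal(size=OBS_DIM)  # stand-in raw sensory reading
state = encode(obs)
print(state.shape)              # (5,) -- the policy only ever sees this
```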
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
- The Ingredients of Real-World Robotic Reinforcement Learning [71.92831985295163]
We discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.
We propose a particular instantiation of such a system, using dexterous manipulation as our case study.
We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand.
arXiv Detail & Related papers (2020-04-27T03:36:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and accepts no responsibility for any consequences of its use.