Shared Autonomy for Proximal Teaching
- URL: http://arxiv.org/abs/2502.19899v1
- Date: Thu, 27 Feb 2025 09:14:17 GMT
- Title: Shared Autonomy for Proximal Teaching
- Authors: Megha Srivastava, Reihaneh Iranmanesh, Yuchen Cui, Deepak Gopinath, Emily Sumner, Andrew Silva, Laporsha Dees, Guy Rosman, Dorsa Sadigh
- Abstract summary: Motor skill learning often requires experienced professionals who can provide personalized instruction. Z-COACH is a method for using shared autonomy to provide personalized instruction targeting interpretable task sub-skills. In a user study we show that Z-COACH helps identify which skills each student should first practice, leading to an overall improvement in driving time, behavior, and smoothness.
- Score: 31.70561682131625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motor skill learning often requires experienced professionals who can provide personalized instruction. Unfortunately, the availability of high-quality training can be limited for specialized tasks, such as high performance racing. Several recent works have leveraged AI-assistance to improve instruction of tasks ranging from rehabilitation to surgical robot tele-operation. However, these works often make simplifying assumptions on the student learning process, and fail to model how a teacher's assistance interacts with different individuals' abilities when determining optimal teaching strategies. Inspired by the idea of scaffolding from educational psychology, we leverage shared autonomy, a framework for combining user inputs with robot autonomy, to aid with curriculum design. Our key insight is that the way a student's behavior improves in the presence of assistance from an autonomous agent can highlight which sub-skills might be most ``learnable'' for the student, or within their Zone of Proximal Development. We use this to design Z-COACH, a method for using shared autonomy to provide personalized instruction targeting interpretable task sub-skills. In a user study (n=50), where we teach high performance racing in a simulated environment of the Thunderhill Raceway Park with the CARLA Autonomous Driving simulator, we show that Z-COACH helps identify which skills each student should first practice, leading to an overall improvement in driving time, behavior, and smoothness. Our work shows that increasingly available semi-autonomous capabilities (e.g. in vehicles, robots) can not only assist human users, but also help *teach* them.
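To make the abstract's two ingredients concrete, here is a minimal sketch: shared autonomy as a linear arbitration between the student's input and an autonomous agent's action, and a simple ranking of sub-skills by how much the student improves when assisted (a proxy for the Zone of Proximal Development). The function names, sub-skill labels, and numbers are illustrative assumptions, not the authors' Z-COACH implementation.

```python
import numpy as np

def blend_action(user_action, agent_action, alpha):
    """Linear-arbitration shared autonomy: the executed command is a convex
    combination of the student's input and the autonomous agent's action.
    alpha = 0 gives full student control, alpha = 1 gives full autonomy."""
    return (1.0 - alpha) * np.asarray(user_action) + alpha * np.asarray(agent_action)

def zpd_scores(unassisted, assisted):
    """Rank sub-skills by improvement under assistance.

    unassisted / assisted: dicts mapping a sub-skill name (e.g. 'braking_point')
    to a per-skill performance metric where higher is better. A large gain with
    assistance suggests the skill may lie within the student's Zone of Proximal
    Development and is a good candidate to practice next.
    """
    return sorted(
        ((skill, assisted[skill] - unassisted[skill]) for skill in unassisted),
        key=lambda item: item[1],
        reverse=True,
    )

# Blend a (steering, throttle) command from student and agent (hypothetical values).
cmd = blend_action([0.2, 0.0], [0.4, 0.1], alpha=0.5)

# Hypothetical per-skill metrics for one student (illustrative numbers only).
solo = {"braking_point": 0.40, "racing_line": 0.55, "throttle_smoothness": 0.70}
with_help = {"braking_point": 0.75, "racing_line": 0.60, "throttle_smoothness": 0.72}
print(zpd_scores(solo, with_help))  # braking_point improves most -> practice it first
```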
Related papers
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and then combines imitation and reinforcement learning to maximize their strengths.
We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z) - SELFI: Autonomous Self-Improvement with Reinforcement Learning for Social Navigation [54.97931304488993]
Self-improving robots that interact and improve with experience are key to the real-world deployment of robotic systems.
We propose an online learning method, SELFI, that leverages online robot experience to rapidly fine-tune pre-trained control policies.
We report improvements in terms of collision avoidance, as well as more socially compliant behavior, measured by a human user study.
arXiv Detail & Related papers (2024-03-01T21:27:03Z) - Mimicking the Maestro: Exploring the Efficacy of a Virtual AI Teacher in
Fine Motor Skill Acquisition [3.07176124710244]
Motor skills, especially fine motor skills like handwriting, play an essential role in academic pursuits and everyday life.
Traditional methods to teach these skills, although effective, can be time-consuming and inconsistent.
We introduce an AI teacher model that captures the distinct characteristics of human instructors.
arXiv Detail & Related papers (2023-10-16T11:11:43Z) - Coaching a Teachable Student [10.81020059614133]
We propose a knowledge distillation framework for teaching a sensorimotor student agent to drive from the supervision of a privileged teacher agent.
The key insight is to design a student that learns to align its input features with the teacher's privileged Bird's Eye View (BEV) space.
To scaffold the difficult sensorimotor learning task, the student model is optimized via a student-paced coaching mechanism with various forms of auxiliary supervision.
arXiv Detail & Related papers (2023-06-16T17:59:38Z) - Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement
Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Assistive Teaching of Motor Control Tasks to Humans [18.537539158464213]
We propose an AI-assisted teaching algorithm that breaks down any motor control task into teachable skills.
We show that assisted teaching with skills improves student performance by around 40% compared to practicing full trajectories without skills.
arXiv Detail & Related papers (2022-11-25T10:18:29Z) - ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically
Simulated Characters [123.88692739360457]
General-purpose motor skills enable humans to perform complex tasks.
These skills also provide powerful priors for guiding their behaviors when learning new tasks.
We present a framework for learning versatile and reusable skill embeddings for physically simulated characters.
arXiv Detail & Related papers (2022-05-04T06:13:28Z) - Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm is capable of autonomously discovering, learning, and adapting interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of varying difficulty.
arXiv Detail & Related papers (2020-09-23T07:18:21Z) - Interaction-limited Inverse Reinforcement Learning [50.201765937436654]
We present two different training strategies: Curriculum Inverse Reinforcement Learning (CIRL) covering the teacher's perspective, and Self-Paced Inverse Reinforcement Learning (SPIRL) focusing on the learner's perspective.
Using experiments in simulation and with a real robot learning a task from a human demonstrator, we show that our training strategies enable faster training than a random teacher for CIRL and than a batch learner for SPIRL.
arXiv Detail & Related papers (2020-07-01T12:31:52Z)