Towards Modeling and Influencing the Dynamics of Human Learning
- URL: http://arxiv.org/abs/2301.00901v1
- Date: Mon, 2 Jan 2023 23:59:45 GMT
- Title: Towards Modeling and Influencing the Dynamics of Human Learning
- Authors: Ran Tian, Masayoshi Tomizuka, Anca Dragan, and Andrea Bajcsy
- Abstract summary: We take a step towards enabling robots to understand the influence they have, leverage it to better assist people, and help human models more quickly align with reality.
Our key idea is to model the human's learning as a nonlinear dynamical system which evolves the human's internal model given new observations.
We then formalize how robots can influence human learning by embedding the human's learning dynamics model into the robot planning problem.
- Score: 26.961274302321343
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans have internal models of robots (like their physical capabilities), the
world (like what will happen next), and their tasks (like a preferred goal).
However, human internal models are not always perfect: for example, it is easy
to underestimate a robot's inertia. Nevertheless, these models change and
improve over time as humans gather more experience. Interestingly, robot
actions influence what this experience is, and therefore influence how people's
internal models change. In this work we take a step towards enabling robots to
understand the influence they have, leverage it to better assist people, and
help human models more quickly align with reality. Our key idea is to model the
human's learning as a nonlinear dynamical system which evolves the human's
internal model given new observations. We formulate a novel optimization
problem to infer the human's learning dynamics from demonstrations that
naturally exhibit human learning. We then formalize how robots can influence
human learning by embedding the human's learning dynamics model into the robot
planning problem. Although our formulations provide concrete problem
statements, they are intractable to solve in full generality. We contribute an
approximation that sacrifices the complexity of the human internal models we
can represent, but enables robots to learn the nonlinear dynamics of these
internal models. We evaluate our inference and planning methods in a suite of
simulated environments and an in-person user study, where a 7DOF robotic arm
teaches participants to be better teleoperators. While influencing human
learning remains an open problem, our results demonstrate that this influence
is possible and can be helpful in real human-robot interaction.
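To make the abstract's key idea concrete, here is a minimal, hypothetical sketch, not the authors' implementation: the human's internal model is summarized by a parameter vector, its evolution under new observations is an assumed nonlinear update, and each candidate robot plan is scored by a surrogate objective that trades off task cost against how far the human's final internal model sits from reality. The function names, the tanh update rule, and the weight matrix W are illustrative assumptions.
```python
import numpy as np

# Hypothetical sketch of the paper's key idea; names and the tanh update are
# illustrative assumptions, not the authors' actual model or code.

def human_learning_step(theta, observation, W):
    """Assumed nonlinear learning dynamics: the human's internal-model
    parameters theta are nudged after observing the outcome of a robot action."""
    x = np.concatenate([theta, observation])
    return theta + np.tanh(W @ x)

def rollout_internal_model(theta0, plan, observe, W):
    """Roll out the human's internal model under a candidate robot plan,
    where observe(u) returns what the human sees when the robot applies u."""
    theta = theta0
    for u in plan:
        theta = human_learning_step(theta, observe(u), W)
    return theta

def plan_to_influence(theta0, theta_true, candidate_plans, observe, W, task_cost):
    """Crude surrogate for the planning problem: choose the plan that trades off
    task cost against how far the human's final internal model is from reality."""
    def objective(plan):
        theta_T = rollout_internal_model(theta0, plan, observe, W)
        return task_cost(plan) + np.linalg.norm(theta_T - theta_true)
    return min(candidate_plans, key=objective)
```
In this sketch, theta might be a scalar estimate of the robot's inertia, observe(u) the displacement the human sees for a command u, and W a matrix fit to demonstrations that exhibit human learning (the paper's inference step); here W is simply assumed given.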
Related papers
- On the Effect of Robot Errors on Human Teaching Dynamics [1.7249361224827533]
We investigate how the presence and severity of robot errors affect three dimensions of human teaching dynamics.
Results show that people tend to spend more time teaching robots with errors.
Our findings offer valuable insights for designing effective interfaces for interactive learning.
arXiv Detail & Related papers (2024-09-15T19:02:34Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- SACSoN: Scalable Autonomous Control for Social Navigation [62.59274275261392]
We develop methods for training policies for socially unobtrusive navigation.
By minimizing the counterfactual perturbation that the robot induces in human behavior, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.
We collect a large dataset where an indoor mobile robot interacts with human bystanders.
arXiv Detail & Related papers (2023-06-02T19:07:52Z)
- Affordances from Human Videos as a Versatile Representation for Robotics [31.248842798600606]
We train a visual affordance model that estimates where and how in the scene a human is likely to interact.
The structure of these behavioral affordances directly enables the robot to perform many complex tasks.
We show the efficacy of our approach, which we call VRB, across 4 real-world environments, over 10 different tasks, and 2 robotic platforms operating in the wild.
arXiv Detail & Related papers (2023-04-17T17:59:34Z)
- Real-World Humanoid Locomotion with Reinforcement Learning [92.85934954371099]
We present a fully learning-based approach for real-world humanoid locomotion.
Our controller can walk over various outdoor terrains, is robust to external disturbances, and can adapt in context.
arXiv Detail & Related papers (2023-03-06T18:59:09Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- DayDreamer: World Models for Physical Robot Learning [142.11031132529524]
Deep reinforcement learning is a common approach to robot learning but requires a large amount of trial and error.
Many advances in robot learning rely on simulators.
In this paper, we apply Dreamer to 4 robots to learn online and directly in the real world, without simulators.
arXiv Detail & Related papers (2022-06-28T17:44:48Z)
- Reasoning about Counterfactuals to Improve Human Inverse Reinforcement Learning [5.072077366588174]
Humans naturally infer other agents' beliefs and desires by reasoning about their observable behavior.
We propose to incorporate the learner's current understanding of the robot's decision making into our model of human IRL.
We also propose a novel measure for estimating the difficulty for a human to predict instances of a robot's behavior in unseen environments.
arXiv Detail & Related papers (2022-03-03T17:06:37Z)
- Neuroscience-inspired perception-action in robotics: applying active inference for state estimation, control and self-perception [2.1067139116005595]
We discuss how neuroscience findings open up opportunities to improve current estimation and control algorithms in robotics.
This paper summarizes some experiments and lessons learned from developing such a computational model on real embodied platforms.
arXiv Detail & Related papers (2021-05-10T10:59:38Z)
- Dynamically Switching Human Prediction Models for Efficient Planning [32.180808286226075]
We give the robot access to a suite of human models and enable it to assess the performance-computation trade-off online.
Our experiments in a driving simulator showcase how the robot can achieve performance comparable to always using the best human model.
arXiv Detail & Related papers (2021-03-13T23:48:09Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it lists and is not responsible for any consequences arising from its use.