From Interactive to Co-Constructive Task Learning
- URL: http://arxiv.org/abs/2305.15535v1
- Date: Wed, 24 May 2023 19:45:30 GMT
- Title: From Interactive to Co-Constructive Task Learning
- Authors: Anna-Lisa Vollmer, Daniel Leidner, Michael Beetz, Britta Wrede
- Abstract summary: We will review current proposals for interactive task learning and discuss their main contributions.
We then discuss our notion of co-construction and summarize research insights from adult-child and human-robot interactions.
- Score: 13.493719155524404
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Humans have developed the capability to teach relevant aspects of new or
adapted tasks to a social peer with very few task demonstrations by making use
of scaffolding strategies that leverage prior knowledge and importantly prior
joint experience to yield a joint understanding and a joint execution of the
required steps to solve the task. This process has been discovered and analyzed
in parent-infant interaction and constitutes a "co-construction", as it allows
both the teacher and the learner to jointly contribute to the task. We
propose to focus research in robot interactive learning on this co-construction
process to enable robots to learn from non-expert users in everyday situations.
In the following, we will review current proposals for interactive task
learning and discuss their main contributions with respect to the interaction
they entail. We then discuss our notion of co-construction and summarize
research insights from adult-child and human-robot interactions to elucidate
its nature in more detail. From this overview we finally derive research
desiderata that entail the dimensions architecture, representation, interaction
and explainability.
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms the representative models regarding objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- The Power of Combined Modalities in Interactive Robot Learning [0.0]
This study contributes to the evolving field of robot learning in interaction with humans, examining the impact of diverse input modalities on learning outcomes.
It introduces the concept of "meta-modalities" which encapsulate additional forms of feedback beyond the traditional preference and scalar feedback mechanisms.
arXiv Detail & Related papers (2024-05-13T14:59:44Z)
- Understanding Entrainment in Human Groups: Optimising Human-Robot Collaboration from Lessons Learned during Human-Human Collaboration [7.670608800568494]
Successful entrainment during collaboration positively affects trust, willingness to collaborate, and likeability towards collaborators.
This paper contributes to the Human-Computer/Robot Interaction (HCI/HRI) using a human-centred approach to identify characteristics of entrainment during pair- and group-based collaboration.
arXiv Detail & Related papers (2024-02-23T16:42:17Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- Automatic Context-Driven Inference of Engagement in HMI: A Survey [6.479224589451863]
This paper presents a survey on engagement inference for human-machine interaction.
It covers an interdisciplinary definition of engagement, its components and factors, publicly available datasets, ground-truth assessment, and the most commonly used features and methods.
It serves as a guide for the development of future human-machine interaction interfaces with reliable context-aware engagement inference capability.
arXiv Detail & Related papers (2022-09-30T10:46:13Z)
- Autonomous Open-Ended Learning of Tasks with Non-Stationary Interdependencies [64.0476282000118]
Intrinsic motivations have proven to generate a task-agnostic signal to properly allocate the training time amongst goals.
While most works in the field of intrinsically motivated open-ended learning focus on scenarios where goals are independent of each other, only a few have studied the autonomous acquisition of interdependent tasks.
In particular, we first deepen the analysis of a previous system, showing the importance of incorporating information about the relationships between tasks at a higher level of the architecture.
Then we introduce H-GRAIL, a new system that extends the previous one by adding a new learning layer to store the autonomously acquired sequences.
arXiv Detail & Related papers (2022-05-16T10:43:01Z)
- Teachable Reinforcement Learning via Advice Distillation [161.43457947665073]
We propose a new supervision paradigm for interactive learning based on "teachable" decision-making systems that learn from structured advice provided by an external teacher.
We show that agents that learn from advice can acquire new skills with significantly less human supervision than standard reinforcement learning algorithms.
arXiv Detail & Related papers (2022-03-19T03:22:57Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the approach that explores the interaction between humans and robots.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Investigating Human Response, Behaviour, and Preference in Joint-Task Interaction [3.774610219328564]
We designed an experiment to examine human behaviour and responses as participants interact with Explainable Planning (XAIP) agents.
We also present the results from an empirical analysis where we examined the behaviour of the two agents for simulated users.
arXiv Detail & Related papers (2020-11-27T22:16:59Z)
- LEMMA: A Multi-view Dataset for Learning Multi-agent Multi-task Activities [119.88381048477854]
We introduce the LEMMA dataset to provide a single home to address missing dimensions with meticulously designed settings.
We densely annotate the atomic-actions with human-object interactions to provide ground-truths of the compositionality, scheduling, and assignment of daily activities.
We hope this effort would drive the machine vision community to examine goal-directed human activities and further study the task scheduling and assignment in the real world.
arXiv Detail & Related papers (2020-07-31T00:13:54Z)
- Towards Effective Human-AI Collaboration in GUI-Based Interactive Task Learning Agents [29.413358312233253]
We argue that a key challenge in enabling usable and useful interactive task learning for intelligent agents is to facilitate effective Human-AI collaboration.
We reflect on our past 5 years of efforts on designing, developing and studying the SUGILITE system.
arXiv Detail & Related papers (2020-03-05T14:12:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.