A Personalized Household Assistive Robot that Learns and Creates New
Breakfast Options through Human-Robot Interaction
- URL: http://arxiv.org/abs/2307.00114v1
- Date: Fri, 30 Jun 2023 19:57:15 GMT
- Title: A Personalized Household Assistive Robot that Learns and Creates New
Breakfast Options through Human-Robot Interaction
- Authors: Ali Ayub, Chrystopher L. Nehaniv and Kerstin Dautenhahn
- Abstract summary: We present a cognitive architecture for a household assistive robot that can learn personalized breakfast options from its users.
The architecture can also use the learned knowledge to create new breakfast options over a longer period of time.
The architecture is integrated with the Fetch mobile manipulator robot and validated as a proof-of-concept system evaluation.
- Score: 9.475039534437332
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For robots to assist users with household tasks, they must first learn about
the tasks from the users. Further, performing the same task every day, in the
same way, can become boring for the robot's user(s); therefore, assistive
robots must find creative ways to perform tasks in the household. In this
paper, we present a cognitive architecture for a household assistive robot that
can learn personalized breakfast options from its users and then use the
learned knowledge to set up a table for breakfast. The architecture can also
use the learned knowledge to create new breakfast options over a longer period
of time. The proposed cognitive architecture combines state-of-the-art
perceptual learning algorithms, computational implementation of cognitive
models of memory encoding and learning, a task planner for picking and placing
objects in the household, a graphical user interface (GUI) to interact with the
user and a novel approach for creating new breakfast options using the learned
knowledge. The architecture is integrated with the Fetch mobile manipulator
robot and validated as a proof-of-concept system evaluation in a large indoor
environment with multiple kitchen objects. Experimental results demonstrate the
effectiveness of our architecture in learning personalized breakfast options
from the user and in generating new breakfast options that the robot was never taught.
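The creative-recombination idea described in the abstract can be sketched as a minimal toy example. This is a hypothetical illustration only: the class and method names, and the representation of a breakfast option as a set of kitchen objects, are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: learn a user's breakfast options as sets of kitchen
# objects, then recombine known objects into an option the robot was never
# taught. All names here are illustrative, not from the paper.
import itertools
import random


class BreakfastMemory:
    """Stores breakfast options learned from a user as frozensets of objects."""

    def __init__(self):
        self.options = set()

    def learn(self, objects):
        # An option taught via the GUI, e.g. {"bowl", "cereal", "milk"}.
        self.options.add(frozenset(objects))

    def known_objects(self):
        # Union of all objects seen across learned options.
        return set().union(*self.options) if self.options else set()

    def create_new_option(self, size=3, rng=random):
        """Sample a combination of known objects that is not a learned option."""
        objects = sorted(self.known_objects())
        candidates = [frozenset(c) for c in itertools.combinations(objects, size)]
        novel = [c for c in candidates if c not in self.options]
        return rng.choice(novel) if novel else None


memory = BreakfastMemory()
memory.learn({"bowl", "cereal", "milk"})
memory.learn({"plate", "toast", "jam"})
new_option = memory.create_new_option(size=3)
```

With two learned options over six objects, the sketch has 20 candidate three-object combinations, of which 18 are novel; `create_new_option` returns one of those, mirroring the idea of generating options never explicitly taught.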
Related papers
- Interactive Continual Learning Architecture for Long-Term Personalization of Home Service Robots [11.648129262452116]
We develop a novel interactive continual learning architecture for continual learning of semantic knowledge in a home environment through human-robot interaction.
The architecture builds on core cognitive principles of learning and memory for efficient and real-time learning of new knowledge from humans.
arXiv Detail & Related papers (2024-03-06T04:55:39Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Don't Forget to Buy Milk: Contextually Aware Grocery Reminder Household Robot [8.430502131775722]
We present a computational architecture that can allow a robot to learn personalized contextual knowledge of a household.
The architecture can then use the learned knowledge to make predictions about missing items from the household over a long period of time.
The architecture is integrated with the Fetch mobile manipulator robot and validated in a large indoor environment.
arXiv Detail & Related papers (2022-07-19T03:38:43Z)
- TAILOR: Teaching with Active and Incremental Learning for Object Registration [18.941458386996544]
We present TAILOR -- a method and system for object registration with active and incremental learning.
We demonstrate the effectiveness of our method with a KUKA robot to learn novel objects used in a real-world gearbox assembly task through natural interactions.
arXiv Detail & Related papers (2022-05-24T01:14:00Z)
- iRoPro: An interactive Robot Programming Framework [2.7651063843287718]
iRoPro allows users with little to no technical background to teach a robot new reusable actions.
We implement iRoPro as an end-to-end system on a Baxter Research Robot.
arXiv Detail & Related papers (2021-12-08T13:53:43Z)
- Functional Task Tree Generation from a Knowledge Graph to Solve Unseen Problems [5.400294730456784]
Unlike humans, robots cannot creatively adapt to novel scenarios.
Existing knowledge in the form of a knowledge graph is used as a base of reference to create task trees.
Our results indicate that the proposed method can produce task plans with high accuracy even for never-before-seen ingredient combinations.
arXiv Detail & Related papers (2021-12-04T21:28:22Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- What Can I Do Here? Learning New Skills by Imagining Visual Affordances [128.65223577406587]
We show how generative models of possible outcomes can allow a robot to learn visual representations of affordances.
In effect, prior data is used to learn what kinds of outcomes may be possible, such that when the robot encounters an unfamiliar setting, it can sample potential outcomes from its model.
We show that visuomotor affordance learning (VAL) can be used to train goal-conditioned policies that operate on raw image inputs.
arXiv Detail & Related papers (2021-06-01T17:58:02Z)
- Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills [93.12417203541948]
We propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset.
We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects.
arXiv Detail & Related papers (2021-04-15T20:10:11Z)
- Visionary: Vision architecture discovery for robot learning [58.67846907923373]
We propose a vision-based architecture search algorithm for robot manipulation learning, which discovers interactions between low dimension action inputs and high dimensional visual inputs.
Our approach automatically designs architectures while training on the task, discovering novel ways of combining, and attending to, image feature representations together with actions and features from previous layers.
arXiv Detail & Related papers (2021-03-26T17:51:43Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been shown to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.