Don't Forget to Buy Milk: Contextually Aware Grocery Reminder Household Robot
- URL: http://arxiv.org/abs/2207.09050v2
- Date: Wed, 20 Jul 2022 00:51:05 GMT
- Title: Don't Forget to Buy Milk: Contextually Aware Grocery Reminder Household Robot
- Authors: Ali Ayub, Chrystopher L. Nehaniv, and Kerstin Dautenhahn
- Abstract summary: We present a computational architecture that can allow a robot to learn personalized contextual knowledge of a household.
The architecture can then use the learned knowledge to make predictions about missing items from the household over a long period of time.
The architecture is integrated with the Fetch mobile manipulator robot and validated in a large indoor environment.
- Score: 8.430502131775722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Assistive robots operating in household environments would require items to
be available in the house to perform assistive tasks. However, when these items
run out, the assistive robot must remind its user to buy the missing items. In
this paper, we present a computational architecture that can allow a robot to
learn personalized contextual knowledge of a household through interactions
with its user. The architecture can then use the learned knowledge to make
predictions about missing items from the household over a long period of time.
The architecture integrates state-of-the-art perceptual learning algorithms,
cognitive models of memory encoding and learning, a reasoning module for
predicting missing items from the household, and a graphical user interface
(GUI) to interact with the user. The architecture is integrated with the Fetch
mobile manipulator robot and validated in a large indoor environment with
multiple contexts and objects. Our experimental results show that the robot can
adapt to an environment by learning contextual knowledge through interactions
with its user. The robot can also use the learned knowledge to correctly
predict missing items over multiple weeks and it is robust against sensory and
perceptual errors.
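The listing does not include code, but the prediction step the abstract describes (using learned contextual knowledge of a household to flag items that are likely missing) can be illustrated with a small sketch. The class, counters, and presence threshold below are illustrative assumptions made for this page; they are not the authors' architecture, which combines perceptual learning, cognitive memory models, a reasoning module, and a GUI.

```python
# Hypothetical sketch of context-based missing-item prediction.
# The data structures and threshold are assumptions for illustration,
# not the architecture described in the paper.
from collections import defaultdict


class ContextualGroceryMemory:
    """Tracks how often each item has been seen in each household context."""

    def __init__(self, presence_threshold: float = 0.6):
        self.presence_threshold = presence_threshold
        # context -> item -> [times_seen, times_context_visited]
        self.counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def observe(self, context: str, items_seen: set) -> None:
        """Update memory after the robot inspects one context (e.g. 'fridge')."""
        known_items = set(self.counts[context]) | items_seen
        for item in known_items:
            seen, visits = self.counts[context][item]
            self.counts[context][item] = [seen + (item in items_seen), visits + 1]

    def predict_missing(self, context: str, items_seen: set) -> list:
        """Return items usually present in this context but absent from the latest observation."""
        missing = []
        for item, (seen, visits) in self.counts[context].items():
            usually_present = visits > 0 and seen / visits >= self.presence_threshold
            if usually_present and item not in items_seen:
                missing.append(item)
        return missing


# Usage: teach the memory over a few visits, then query it after a new observation.
memory = ContextualGroceryMemory()
memory.observe("fridge", {"milk", "eggs", "butter"})
memory.observe("fridge", {"milk", "eggs"})
memory.observe("fridge", {"eggs", "butter"})
print(memory.predict_missing("fridge", {"eggs"}))  # -> ['milk', 'butter']
```

Over repeated visits, the memory learns which items are usually present in each context; anything below the presence threshold in a later observation becomes a candidate reminder, which is roughly the behavior the abstract reports over multiple weeks.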
Related papers
- Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction [52.12746368727368]
Differentiable simulation has become a powerful tool for system identification.
Our approach calibrates object properties by using information from the robot, without relying on data from the object itself.
We demonstrate the effectiveness of our method on a low-cost robotic platform.
arXiv Detail & Related papers (2024-10-04T20:48:38Z)
- SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning [17.125080112897102]
This paper addresses a challenging interactive task learning scenario where the robot is unaware of a concept that's key to solving the instructed task.
We propose SECURE, an interactive task learning framework designed to solve such problems by fixing a deficient domain model using embodied conversation.
Using SECURE, the robot not only learns from the user's corrective feedback when it makes a mistake, but it also learns to make strategic dialogue decisions for revealing useful evidence about novel concepts for solving the instructed task.
arXiv Detail & Related papers (2024-09-26T11:40:07Z)
- Interactive Continual Learning Architecture for Long-Term Personalization of Home Service Robots [11.648129262452116]
We develop a novel interactive continual learning architecture for continual learning of semantic knowledge in a home environment through human-robot interaction.
The architecture builds on core cognitive principles of learning and memory for efficient and real-time learning of new knowledge from humans.
arXiv Detail & Related papers (2024-03-06T04:55:39Z)
- A Personalized Household Assistive Robot that Learns and Creates New Breakfast Options through Human-Robot Interaction [9.475039534437332]
We present a cognitive architecture for a household assistive robot that can learn personalized breakfast options from its users.
The architecture can also use the learned knowledge to create new breakfast options over a longer period of time.
The architecture is integrated with the Fetch mobile manipulator robot and validated as a proof-of-concept system evaluation.
arXiv Detail & Related papers (2023-06-30T19:57:15Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Knowledge Acquisition and Completion for Long-Term Human-Robot Interactions using Knowledge Graph Embedding [0.0]
We propose an architecture to gather data from users and environments over long runs of continual learning.
We adopt Knowledge Graph Embedding techniques to generalize the acquired information with the goal of incrementally extending the robot's inner representation of the environment.
We evaluate the performance of the overall continual learning architecture by measuring the robot's ability to learn entities and relations coming from unknown contexts.
arXiv Detail & Related papers (2023-01-17T12:23:40Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective; a minimal sketch of such a distance-based reward appears after this list.
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
The algorithm is capable of autonomously discovering, learning, and adapting interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of various difficulties.
arXiv Detail & Related papers (2020-09-23T07:18:21Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
- The State of Lifelong Learning in Service Robots: Current Bottlenecks in Object Perception and Manipulation [3.7858180627124463]
The state of the art continues to improve toward a proper coupling between object perception and manipulation.
In most cases, robots are able to recognize various objects and quickly plan a collision-free trajectory to grasp a target object.
In such environments, no matter how extensive the training data used for batch learning, a robot will always face new objects.
Apart from robot self-learning, non-expert users could interactively guide the process of experience acquisition.
arXiv Detail & Related papers (2020-03-18T11:00:55Z)
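For the "Learning Reward Functions for Robotic Manipulation by Observing Humans" entry above, the summary states that the learned rewards are distances to a goal in a learned embedding space. The snippet below is a minimal sketch of that distance-based reward idea only; `encode` is a placeholder standing in for the paper's time-contrastive embedding network, not their model.

```python
# Minimal sketch of a distance-based reward in a learned embedding space.
# `encode` is a placeholder; the paper trains an embedding network with a
# time-contrastive objective, which is not reproduced here.
import numpy as np


def encode(observation: np.ndarray) -> np.ndarray:
    """Placeholder embedding; a real system would use a trained network."""
    return observation / (np.linalg.norm(observation) + 1e-8)


def reward(observation: np.ndarray, goal_observation: np.ndarray) -> float:
    """Reward is the negative distance between current and goal embeddings."""
    return -float(np.linalg.norm(encode(observation) - encode(goal_observation)))


# Usage: the reward increases toward 0 as the observation approaches the goal.
goal = np.array([1.0, 0.0, 0.0])
print(reward(np.array([0.2, 0.9, 0.1]), goal))  # far from goal: strongly negative
print(reward(np.array([0.9, 0.1, 0.0]), goal))  # near the goal: close to 0
```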
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.