Toward Forgetting-Sensitive Referring Expression Generation for
Integrated Robot Architectures
- URL: http://arxiv.org/abs/2007.08672v1
- Date: Thu, 16 Jul 2020 22:20:15 GMT
- Title: Toward Forgetting-Sensitive Referring Expression Generation for
Integrated Robot Architectures
- Authors: Tom Williams and Torin Johnson and Will Culpepper and Kellyn Larson
- Abstract summary: We show how different models of working memory forgetting may be differentially effective at producing natural human-like referring expressions.
In this work, we computationalize two candidate models of working memory forgetting within a robot cognitive architecture, and demonstrate how they lead to cognitive availability-based differences in generated referring expressions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To engage in human-like dialogue, robots require the ability to describe the
objects, locations, and people in their environment, a capability known as
"Referring Expression Generation." As speakers repeatedly refer to similar
objects, they tend to re-use properties from previous descriptions, in part to
help the listener, and in part due to cognitive availability of those
properties in working memory (WM). Because different theories of working memory
"forgetting" necessarily lead to differences in cognitive availability, we
hypothesize that they will similarly result in generation of different
referring expressions. To design effective intelligent agents, it is thus
necessary to determine how different models of forgetting may be differentially
effective at producing natural human-like referring expressions. In this work,
we computationalize two candidate models of working memory forgetting within a
robot cognitive architecture, and demonstrate how they lead to cognitive
availability-based differences in generated referring expressions.
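The summary does not specify how the two forgetting models are computationalized. As a minimal illustrative sketch only (not the authors' implementation; all class and function names here are hypothetical), two common candidates are time-based activation decay and capacity-based interference, either of which changes which properties remain "cognitively available" for reuse in a referring expression:

```python
class WMProperty:
    """A property (e.g. "red", "left of the box") held in working memory."""
    def __init__(self, name, activation=1.0):
        self.name = name
        self.activation = activation  # boosted when the property is used

def decay_forget(props, rate=0.5):
    """Time-based decay: every property's activation fades each time step."""
    for p in props:
        p.activation *= rate

def interference_forget(props, capacity=3):
    """Capacity-based interference: only the most active items survive."""
    ranked = sorted(props, key=lambda p: p.activation, reverse=True)
    for p in ranked[capacity:]:
        p.activation = 0.0  # displaced by more active items

def generate_re(props, threshold=0.2):
    """Prefer cognitively available (high-activation) properties,
    mirroring speakers' tendency to reuse recently used descriptions."""
    available = [p for p in props if p.activation >= threshold]
    return [p.name for p in
            sorted(available, key=lambda p: p.activation, reverse=True)]
```

Under this sketch, the same scene yields different referring expressions depending on which forgetting function is applied, which is the availability-based difference the paper hypothesizes.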
Related papers
- A Multi-Modal Explainability Approach for Human-Aware Robots in Multi-Party Conversation [39.87346821309096]
We present an addressee estimation model with improved performance in comparison with the previous SOTA.
We also propose several ways to incorporate explainability and transparency in the aforementioned architecture.
arXiv Detail & Related papers (2024-05-20T13:09:32Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z) - Agentivit\`a e telicit\`a in GilBERTo: implicazioni cognitive [77.71680953280436]
The goal of this study is to investigate whether a Transformer-based neural language model infers lexical semantics.
The semantic properties considered are telicity (also combined with definiteness) and agentivity.
arXiv Detail & Related papers (2023-07-06T10:52:22Z) - I am Only Happy When There is Light: The Impact of Environmental Changes
on Affective Facial Expressions Recognition [65.69256728493015]
We study the impact of different image conditions on the recognition of arousal from human facial expressions.
Our results show how the interpretation of human affective states can differ greatly in either the positive or negative direction.
arXiv Detail & Related papers (2022-10-28T16:28:26Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Contrast and Generation Make BART a Good Dialogue Emotion Recognizer [38.18867570050835]
Long-range contextual emotional relationships with speaker dependency play a crucial part in dialogue emotion recognition.
We adopt supervised contrastive learning to make different emotions mutually exclusive to identify similar emotions better.
We utilize an auxiliary response generation task to enhance the model's ability to handle context information.
arXiv Detail & Related papers (2021-12-21T13:38:00Z)
- Perspective-corrected Spatial Referring Expression Generation for Human-Robot Interaction [5.0726912337429795]
We propose a novel perspective-corrected spatial referring expression generation (PcSREG) approach for human-robot interaction.
The task of referring expression generation is simplified into the process of generating diverse spatial relation units.
We implement the proposed approach on a robot system and empirical experiments show that our approach can generate more effective spatial referring expressions.
arXiv Detail & Related papers (2021-04-04T08:00:02Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been demonstrated to be suitable tools to address such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Towards Abstract Relational Learning in Human Robot Interaction [73.67226556788498]
Humans have a rich representation of the entities in their environment.
If robots need to interact successfully with humans, they need to represent entities, attributes, and generalizations in a similar way.
In this work, we address the problem of how to obtain these representations through human-robot interaction.
arXiv Detail & Related papers (2020-11-20T12:06:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.