COMMA: Modeling Relationship among Motivations, Emotions and Actions in
Language-based Human Activities
- URL: http://arxiv.org/abs/2209.06470v1
- Date: Wed, 14 Sep 2022 07:54:20 GMT
- Title: COMMA: Modeling Relationship among Motivations, Emotions and Actions in
Language-based Human Activities
- Authors: Yuqiang Xie and Yue Hu and Wei Peng and Guanqun Bi and Luxi Xing
- Abstract summary: Motivations, emotions, and actions are inter-related essential factors in human activities.
We present the first study that investigates the viability of modeling motivations, emotions, and actions in language-based human activities.
- Score: 12.206523349060179
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Motivations, emotions, and actions are inter-related essential factors in
human activities. While motivations and emotions have long been considered
central to how people take actions, relatively little research has analyzed
the relationship between human mental states and actions. We present the
first study to investigate the viability of modeling motivations, emotions,
and actions in language-based human activities, named COMMA (Cognitive
Framework of Human Activities). Guided by COMMA, we define three natural
language processing tasks (emotion understanding, motivation understanding,
and conditioned action generation), and build a challenging dataset, Hail, by
automatically extracting samples from Story Commonsense. Experimental results
on NLP applications demonstrate the effectiveness of modeling these
relationships. Furthermore, our models inspired by COMMA reveal the essential
relationships among motivations, emotions, and actions better than existing
methods.
Related papers
- PersLLM: A Personified Training Approach for Large Language Models [63.75008885222351]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Development of Compositionality and Generalization through Interactive Learning of Language and Action of Robots [1.7624347338410742]
We propose a brain-inspired neural network model that integrates vision, proprioception, and language into a framework of predictive coding and active inference.
Our results show that generalization to unlearned verb-noun compositions is significantly enhanced when the variations of task composition seen in training are increased.
arXiv Detail & Related papers (2023-11-26T09:11:32Z)
- Generating Human-Centric Visual Cues for Human-Object Interaction Detection via Large Vision-Language Models [59.611697856666304]
Human-object interaction (HOI) detection aims at detecting human-object pairs and predicting their interactions.
We propose three prompts with VLM to generate human-centric visual cues within an image from multiple perspectives of humans.
We develop a transformer-based multimodal fusion module with multitower architecture to integrate visual cue features into the instance and interaction decoders.
arXiv Detail & Related papers (2023-10-04T15:24:00Z)
- A Grammatical Compositional Model for Video Action Detection [24.546886938243393]
We present a novel Grammatical Compositional Model (GCM) for action detection based on typical And-Or graphs.
Our model exploits the intrinsic structures and latent relationships of actions in a hierarchical manner to harness both the compositionality of grammar models and the capability of expressing rich features of DNNs.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-18T09:08:39Z)
- Expanding the Role of Affective Phenomena in Multimodal Interaction Research [57.069159905961214]
We examined over 16,000 papers from selected conferences in multimodal interaction, affective computing, and natural language processing.
We identify 910 affect-related papers and present our analysis of the role of affective phenomena in these papers.
We find limited research on how affect and emotion predictions might be used by AI systems to enhance machine understanding of human social behaviors and cognitive states.
arXiv Detail & Related papers (2022-05-07T03:54:51Z)
- CogIntAc: Modeling the Relationships between Intention, Emotion and Action in Interactive Process from Cognitive Perspective [15.797390372732973]
We propose a novel cognitive framework of individual interaction.
The core of the framework is that individuals achieve interaction through external action driven by their inner intention.
arXiv Detail & Related papers (2022-02-14T04:10:34Z)
- Modeling Intention, Emotion and External World in Dialogue Systems [14.724751780218297]
We propose a RelAtion Interaction Network (RAIN) to jointly model mutual relationships and explicitly integrate historical intention information.
The experiments on the dataset show that our model can take full advantage of the intention, emotion, and action information exchanged between individuals.
arXiv Detail & Related papers (2021-10-12T15:30:21Z)
- Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects [82.81964713263483]
A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli.
Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli.
arXiv Detail & Related papers (2020-07-31T00:13:54Z)
- LEMMA: A Multi-view Dataset for Learning Multi-agent Multi-task Activities [119.88381048477854]
We introduce the LEMMA dataset to provide a single home to address missing dimensions with meticulously designed settings.
We densely annotate the atomic-actions with human-object interactions to provide ground-truths of the compositionality, scheduling, and assignment of daily activities.
We hope this effort would drive the machine vision community to examine goal-directed human activities and further study the task scheduling and assignment in the real world.
arXiv Detail & Related papers (2020-06-29T15:49:34Z)
- Human Activity Recognition based on Dynamic Spatio-Temporal Relations [10.635134217802783]
The description of a single human action and the modeling of the evolution of successive human actions are two major issues in human activity recognition.
We develop a method for human activity recognition that tackles these two issues.
arXiv Detail & Related papers (2020-06-29T15:49:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.