COMMA: Modeling Relationship among Motivations, Emotions and Actions in
Language-based Human Activities
- URL: http://arxiv.org/abs/2209.06470v1
- Date: Wed, 14 Sep 2022 07:54:20 GMT
- Title: COMMA: Modeling Relationship among Motivations, Emotions and Actions in
Language-based Human Activities
- Authors: Yuqiang Xie and Yue Hu and Wei Peng and Guanqun Bi and Luxi Xing
- Abstract summary: Motivations, emotions, and actions are inter-related essential factors in human activities.
We present the first study that investigates the viability of modeling motivations, emotions, and actions in language-based human activities.
- Score: 12.206523349060179
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Motivations, emotions, and actions are interrelated, essential factors in
human activities. While motivations and emotions have long been considered
central to explaining how people act, there has been relatively little
research on analyzing the relationship between human mental states and
actions. We present the first study to investigate the viability of modeling
motivations, emotions, and actions in language-based human activities, named
COMMA (Cognitive Framework of Human Activities). Guided by COMMA, we define
three natural language processing tasks (emotion understanding, motivation
understanding, and conditioned action generation) and build a challenging
dataset, Hail, by automatically extracting samples from Story Commonsense.
Experimental results on NLP applications demonstrate the effectiveness of
modeling these relationships. Furthermore, our COMMA-inspired models reveal
the essential relationships among motivations, emotions, and actions better
than existing methods.
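The three tasks defined under COMMA can be sketched as input/target pairs over a single annotated sample. The sketch below is illustrative only: the `Sample` schema, field names, and labels are hypothetical stand-ins, not the actual Hail dataset format.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One language-based activity sample (hypothetical schema;
    the actual Hail dataset fields may differ)."""
    context: str     # narrative context for the character
    action: str      # the action the character takes
    emotion: str     # e.g. an emotion label, as in Story Commonsense
    motivation: str  # e.g. a motivation label, as in Story Commonsense

def emotion_understanding(s: Sample) -> tuple[str, str]:
    # Predict the emotion from context and action.
    return (f"{s.context} {s.action}", s.emotion)

def motivation_understanding(s: Sample) -> tuple[str, str]:
    # Predict the motivation from context and action.
    return (f"{s.context} {s.action}", s.motivation)

def conditioned_action_generation(s: Sample) -> tuple[str, str]:
    # Generate the action conditioned on context, motivation, and emotion.
    return (f"{s.context} [motivation: {s.motivation}] [emotion: {s.emotion}]",
            s.action)

s = Sample(context="Jenny's exam is tomorrow.",
           action="She stays up late studying.",
           emotion="anticipation",
           motivation="esteem")
print(emotion_understanding(s)[1])   # anticipation
```

Each function returns a (model input, gold target) pair, which is how the three tasks share one underlying sample while probing different edges of the motivation-emotion-action triangle.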
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models on both objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- SIFToM: Robust Spoken Instruction Following through Theory of Mind [51.326266354164716]
We present a cognitively inspired model, Speech Instruction Following through Theory of Mind (SIFToM), to enable robots to pragmatically follow human instructions under diverse speech conditions.
Results show that the SIFToM model outperforms state-of-the-art speech and language models, approaching human-level accuracy on challenging speech instruction following tasks.
arXiv Detail & Related papers (2024-09-17T02:36:10Z)
- Limitations in Employing Natural Language Supervision for Sensor-Based Human Activity Recognition -- And Ways to Overcome Them [10.878632018296326]
Cross-modal contrastive pre-training between natural language and other modalities has demonstrated astonishing performance and effectiveness.
We investigate whether such natural language supervision can be used for wearable sensor-based Human Activity Recognition (HAR).
We discover that, surprisingly, it performs substantially worse than standard end-to-end training and self-supervision.
arXiv Detail & Related papers (2024-08-21T22:30:36Z)
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- A Grammatical Compositional Model for Video Action Detection [24.546886938243393]
We present a novel Grammatical Compositional Model (GCM) for action detection based on typical And-Or graphs.
Our model exploits the intrinsic structures and latent relationships of actions in a hierarchical manner to harness both the compositionality of grammar models and the capability of expressing rich features of DNNs.
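The And-Or graph behind such grammar models composes actions hierarchically: And-nodes chain sub-actions in order, Or-nodes pick among alternative realizations. A minimal sketch of that idea follows; it is illustrative only, not the paper's actual GCM, which couples such a grammar with DNN features.

```python
# Minimal And-Or graph for action composition (illustrative sketch only).
# A node is (kind, payload): ("terminal", label), ("and", children), ("or", children).

def expand(node):
    """Enumerate all terminal sequences an And-Or graph node can produce."""
    kind, payload = node
    if kind == "terminal":
        return [[payload]]
    if kind == "and":   # ordered composition of sub-actions
        seqs = [[]]
        for child in payload:
            seqs = [s + t for s in seqs for t in expand(child)]
        return seqs
    if kind == "or":    # alternative realizations of the same action
        return [s for child in payload for s in expand(child)]
    raise ValueError(f"unknown node kind: {kind}")

# "drink" = reach AND (grasp-cup OR grasp-bottle) AND sip
drink = ("and", [
    ("terminal", "reach"),
    ("or", [("terminal", "grasp_cup"), ("terminal", "grasp_bottle")]),
    ("terminal", "sip"),
])
print(expand(drink))
# Two parses: [['reach', 'grasp_cup', 'sip'], ['reach', 'grasp_bottle', 'sip']]
```

The two enumerated sequences are the compositionality the abstract refers to: one grammar node covers several concrete action realizations.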
arXiv Detail & Related papers (2023-10-04T15:24:00Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- CogIntAc: Modeling the Relationships between Intention, Emotion and Action in Interactive Process from Cognitive Perspective [15.797390372732973]
We propose a novel cognitive framework of individual interaction.
The core of the framework is that individuals achieve interaction through external action driven by their inner intention.
arXiv Detail & Related papers (2022-05-07T03:54:51Z)
- Modeling Intention, Emotion and External World in Dialogue Systems [14.724751780218297]
We propose a RelAtion Interaction Network (RAIN) to jointly model mutual relationships and explicitly integrate historical intention information.
The experiments on the dataset show that our model can take full advantage of the intention, emotion and action between individuals.
arXiv Detail & Related papers (2022-02-14T04:10:34Z)
- Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects [82.81964713263483]
A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli.
Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli.
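The correlational approach described above, inter-subject correlation (ISC) of brain responses to shared stimuli, can be sketched briefly. The data below is synthetic and the setup hypothetical; it only illustrates the general technique, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: responses of shape (subjects, timepoints) for one channel.
shared = rng.standard_normal(200)                          # stimulus-driven signal
responses = shared + 0.5 * rng.standard_normal((10, 200))  # 10 subjects + noise

def isc(responses: np.ndarray) -> float:
    """Leave-one-out inter-subject correlation: correlate each subject's
    response with the mean of all other subjects, then average."""
    n = responses.shape[0]
    rs = []
    for i in range(n):
        others = np.delete(responses, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(responses[i], others)[0, 1])
    return float(np.mean(rs))

print(isc(responses))  # high: responses share stimulus-driven structure
```

Because every subject's response contains the same stimulus-driven component, the leave-one-out correlations are high; a model-based within-subject analysis replaces the "other subjects" reference with model predictions.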
arXiv Detail & Related papers (2021-10-12T15:30:21Z)
- LEMMA: A Multi-view Dataset for Learning Multi-agent Multi-task Activities [119.88381048477854]
We introduce the LEMMA dataset as a single benchmark that addresses these missing dimensions through meticulously designed settings.
We densely annotate the atomic-actions with human-object interactions to provide ground-truths of the compositionality, scheduling, and assignment of daily activities.
We hope this effort would drive the machine vision community to examine goal-directed human activities and further study the task scheduling and assignment in the real world.
arXiv Detail & Related papers (2020-07-31T00:13:54Z)
- Human Activity Recognition based on Dynamic Spatio-Temporal Relations [10.635134217802783]
The description of a single human action and the modeling of the evolution of successive human actions are two major issues in human activity recognition.
We develop a method for human activity recognition that tackles these two issues.
arXiv Detail & Related papers (2020-06-29T15:49:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.