Learning Human Cognitive Appraisal Through Reinforcement Memory Unit
- URL: http://arxiv.org/abs/2208.03473v1
- Date: Sat, 6 Aug 2022 08:56:55 GMT
- Title: Learning Human Cognitive Appraisal Through Reinforcement Memory Unit
- Authors: Yaosi Hu and Zhenzhong Chen
- Abstract summary: We propose a memory-enhancing mechanism for recurrent neural networks that exploits the effect of human cognitive appraisal in sequential assessment tasks.
We conceptualize the memory-enhancing mechanism as a Reinforcement Memory Unit (RMU) that contains an appraisal state together with two reinforcement memories, one positive and one negative.
- Score: 63.83306892013521
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel memory-enhancing mechanism for recurrent neural networks
that exploits the effect of human cognitive appraisal in sequential assessment
tasks. We conceptualize the memory-enhancing mechanism as a Reinforcement
Memory Unit (RMU) that contains an appraisal state together with two
reinforcement memories, one positive and one negative. The two reinforcement
memories are decayed or strengthened by stronger stimuli, and the appraisal
state is then updated through the competition between the positive and
negative reinforcement memories. RMU can therefore learn the appraisal
variation under violently changing stimuli for estimating human affective
experience. As shown in experiments on video quality assessment and video
quality of experience tasks, the proposed reinforcement memory unit achieves
superior performance among recurrent neural networks, which demonstrates the
effectiveness of RMU for modeling human cognitive appraisal.
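The abstract does not give the RMU's learned update equations, but the described dynamics can be illustrated with a minimal sketch. The fixed decay rule, the sign-based gating, and the tanh competition below are illustrative assumptions, not the paper's method:

```python
import math

class ReinforcementMemoryUnitSketch:
    """Hypothetical sketch of an RMU-style cell; the real RMU learns its gates."""

    def __init__(self, decay=0.9):
        self.decay = decay   # forgetting rate applied to both memories
        self.m_pos = 0.0     # positive reinforcement memory
        self.m_neg = 0.0     # negative reinforcement memory

    def step(self, stimulus):
        # Each memory decays, and a stronger stimulus of the matching
        # sign strengthens it.
        self.m_pos = self.decay * self.m_pos + max(stimulus, 0.0)
        self.m_neg = self.decay * self.m_neg + max(-stimulus, 0.0)
        # The appraisal state emerges from the competition of the two
        # memories, squashed to (-1, 1).
        return math.tanh(self.m_pos - self.m_neg)

rmu = ReinforcementMemoryUnitSketch()
appraisals = [rmu.step(s) for s in (1.0, 1.0, -3.0)]
```

Running the cell over a stimulus sequence shows the intended behavior: repeated positive stimuli push the appraisal upward, while a strong negative stimulus lets the negative memory win the competition and flip the appraisal's sign.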
Related papers
- Stable Hadamard Memory: Revitalizing Memory-Augmented Agents for Reinforcement Learning [64.93848182403116]
Current deep-learning memory models struggle in reinforcement learning environments that are partially observable and require long-term memory.
We introduce the Stable Hadamard Memory, a novel memory model for reinforcement learning agents.
Our approach significantly outperforms state-of-the-art memory-based methods on challenging partially observable benchmarks.
arXiv Detail & Related papers (2024-10-14T03:50:17Z) - MADial-Bench: Towards Real-world Evaluation of Memory-Augmented Dialogue Generation [15.64077949677469]
We present a novel Memory-Augmented Dialogue Benchmark (MADail-Bench) to evaluate the effectiveness of memory-augmented dialogue systems (MADS)
The benchmark assesses two tasks separately: memory retrieval and memory recognition, incorporating both passive and proactive memory recall data.
Results from cutting-edge embedding models and large language models on this benchmark indicate the potential for further advancement.
arXiv Detail & Related papers (2024-09-23T17:38:41Z) - Brain-Inspired Continual Learning-Robust Feature Distillation and Re-Consolidation for Class Incremental Learning [0.0]
We introduce a novel framework comprising two core concepts: feature distillation and re-consolidation.
Our framework, named Robust Rehearsal, addresses the challenge of catastrophic forgetting inherent in continual learning systems.
Experiments conducted on CIFAR10, CIFAR100, and real-world helicopter attitude datasets showcase the superior performance of CL models trained with Robust Rehearsal.
arXiv Detail & Related papers (2024-04-22T21:30:11Z) - Estimating Personal Model Parameters from Utterances in Model-based Reminiscence [0.0]
This study utilized a computational model of personal memory recollection based on the adaptive control of thought-rational (ACT-R) cognitive architecture.
We proposed a method for estimating the internal states of users through repeated interactions with the memory model.
Results confirmed the ability of the method to estimate the memory retrieval parameters of the model from the utterances of the user.
arXiv Detail & Related papers (2022-08-15T09:33:23Z) - Performance or Trust? Why Not Both. Deep AUC Maximization with Self-Supervised Learning for COVID-19 Chest X-ray Classifications [72.52228843498193]
In training deep learning models, a compromise often must be made between performance and trust.
In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients.
arXiv Detail & Related papers (2021-12-14T21:16:52Z) - Learning Personal Representations from fMRI by Predicting Neurofeedback Performance [52.77024349608834]
We present a deep neural network method for learning a personal representation for individuals performing a self neuromodulation task, guided by functional MRI (fMRI)
The representation is learned by a self-supervised recurrent neural network that predicts the amygdala activity in the next fMRI frame, given recent fMRI frames, and is conditioned on the learned individual representation.
arXiv Detail & Related papers (2021-12-06T10:16:54Z) - Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z) - Association: Remind Your GAN not to Forget [11.653696510515807]
We propose a brain-like approach that imitates the associative learning process to achieve continual learning.
Experiments demonstrate the effectiveness of our method in alleviating catastrophic forgetting on image-to-image translation tasks.
arXiv Detail & Related papers (2020-11-27T04:43:15Z) - Facial Feedback for Reinforcement Learning: A Case Study and Offline Analysis Using the TAMER Framework [51.237191651923666]
We investigate the potential of agent learning from trainers' facial expressions via interpreting them as evaluative feedback.
With the designed CNN-RNN model, our analysis shows that instructing trainers to use facial expressions and competition can improve the accuracy of estimating positive and negative feedback.
Our results with a simulation experiment show that learning solely from predicted feedback based on facial expressions is possible.
arXiv Detail & Related papers (2020-01-23T17:50:57Z) - Augmented Replay Memory in Reinforcement Learning With Continuous Control [1.6752182911522522]
Online reinforcement learning agents are currently able to process an increasing amount of data by converting it into higher-order value functions.
This expansion increases the agent's state space, enabling it to scale up to more complex problems, but also increases the risk of forgetting caused by learning on redundant or conflicting data.
To improve the approximation of a large amount of data, a random mini-batch of past experiences stored in the replay memory buffer is often replayed at each learning step.
arXiv Detail & Related papers (2019-12-29T20:07:18Z)
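The generic replay mechanism described in that summary, not the paper's augmented variant, can be sketched as a bounded buffer with uniform mini-batch sampling:

```python
import random
from collections import deque

class ReplayBuffer:
    """Illustrative replay memory as commonly used in online RL agents."""

    def __init__(self, capacity=10000):
        # A bounded deque: once full, the oldest experiences are evicted.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        """Store one transition tuple."""
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # A uniformly random mini-batch breaks the temporal correlation
        # of consecutive experiences before each learning step.
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=50)
for i in range(100):
    buf.push(i, 0, 0.0, i + 1, False)
batch = buf.sample(8)
```

Replaying redundant or conflicting transitions from such a buffer is exactly the failure mode the paper's augmentation targets; the sketch shows only the baseline mechanism.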
This list is automatically generated from the titles and abstracts of the papers in this site.