A Computational Approach to Measure Empathy and Theory-of-Mind from
Written Texts
- URL: http://arxiv.org/abs/2108.11810v1
- Date: Thu, 26 Aug 2021 14:23:28 GMT
- Title: A Computational Approach to Measure Empathy and Theory-of-Mind from
Written Texts
- Authors: Yoon Kyung Lee, Inju Lee, Jae Eun Park, Yoonwon Jung, Jiwon Kim, Sowon
Hahn
- Abstract summary: Theory-of-mind (ToM) is a human ability to infer the intentions and thoughts of others.
ToM-Diary is a crowdsourced dataset of 18,238 diaries with 74,014 Korean sentences annotated with different ToM levels.
- Score: 5.105390149198602
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Theory-of-mind (ToM), a human ability to infer the intentions and thoughts of
others, is an essential part of empathetic experiences. Here, we provide a
framework for using NLP models to measure ToM expressed in written texts. For
this purpose, we introduce ToM-Diary, a crowdsourced dataset of 18,238 diaries
with 74,014 Korean sentences annotated with different ToM levels. Each diary was annotated
with ToM levels by trained psychology students and reviewed by selected
psychology experts. The annotators first divided the diaries based on whether
they mentioned other people: self-focused and other-focused. An example of a
self-focused sentence is "I am feeling good." The other-focused sentences
were further classified into different levels. These levels differ by whether
the writer 1) mentions the presence of others without inferring their mental
state (e.g., "I saw a man walking down the street"), 2) fails to take the
perspective of others (e.g., "I don't understand why they refuse to wear masks"),
or 3) successfully takes the perspective of others (e.g., "It must have been hard
for them to continue working"). We tested whether state-of-the-art transformer-based
models (e.g., BERT) could predict underlying ToM levels in sentences. We found
that BERT more successfully detected self-focused sentences than other-focused
ones. Sentences that successfully take the perspective of others (the highest
ToM level) were the most difficult to predict. Our study suggests a promising
direction for large-scale and computational approaches for identifying the
ability of authors to empathize and take the perspective of others. The dataset
is available at https://github.com/humanfactorspsych/covid19-tom-empathy-diary.
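The abstract frames ToM-level prediction as a four-way sentence classification task: self-focused sentences plus the three other-focused levels. The sketch below is a minimal illustration of that setup, not the authors' code; the label names, toy English sentences, training loop, and the bert-base-multilingual-cased checkpoint are assumptions, since the paper only reports that transformer models such as BERT were tested on the Korean ToM-Diary sentences.

```python
# Minimal sketch: fine-tuning a BERT classifier to predict the four ToM levels
# described in the abstract. Labels, sentences, and checkpoint are illustrative.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["self_focused", "other_no_inference", "perspective_failure", "perspective_success"]

# Toy stand-ins for ToM-Diary sentences (English paraphrases of the abstract's examples).
sentences = [
    "I am feeling good.",                                    # self-focused
    "I saw a man walking down the street.",                  # mentions others, no inference
    "I don't understand why they refuse to wear masks.",     # fails to take their perspective
    "It must have been hard for them to continue working.",  # takes their perspective
]
labels = torch.tensor([0, 1, 2, 3])

# Assumption: a multilingual BERT checkpoint; the actual diaries are in Korean.
checkpoint = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=len(LABELS))

enc = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels),
                    batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for input_ids, attention_mask, y in loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()      # cross-entropy over the four ToM levels
        optimizer.step()
        optimizer.zero_grad()

# Predict the ToM level of an unseen sentence.
model.eval()
with torch.no_grad():
    test = tokenizer(["She must be exhausted after the night shift."], return_tensors="pt")
    pred = model(**test).logits.argmax(dim=-1).item()
print(LABELS[pred])
```

In practice, the toy sentences would be replaced by the labeled ToM-Diary sentences from the repository linked above, with a held-out split used to compare per-level accuracy (for example, to revisit the finding that the highest ToM level is hardest to predict).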
Related papers
- SimpleToM: Exposing the Gap between Explicit ToM Inference and Implicit ToM Application in LLMs [72.06808538971487]
We test whether large language models (LLMs) can implicitly apply a "theory of mind" (ToM) to predict behavior.
We create a new dataset, SimpleToM, containing stories with three questions that test different degrees of ToM reasoning.
To our knowledge, SimpleToM is the first dataset to explore downstream reasoning requiring knowledge of mental states in realistic scenarios.
arXiv Detail & Related papers (2024-10-17T15:15:00Z)
- Measuring Psychological Depth in Language Models [50.48914935872879]
We introduce the Psychological Depth Scale (PDS), a novel framework rooted in literary theory that measures an LLM's ability to produce authentic and narratively complex stories.
We empirically validate our framework by showing that humans can consistently evaluate stories based on PDS (Krippendorff's alpha of 0.72; a brief sketch of this agreement computation appears after this list).
Surprisingly, GPT-4 stories either surpassed highly-rated human-written stories sourced from Reddit or were statistically indistinguishable from them.
arXiv Detail & Related papers (2024-06-18T14:51:54Z)
- Towards a Psychology of Machines: Large Language Models Predict Human Memory [0.0]
Large language models (LLMs) are excelling across various tasks despite not being based on human cognition.
This study examines ChatGPT's ability to predict human performance in a language-based memory task.
arXiv Detail & Related papers (2024-03-08T08:41:14Z)
- DepressionEmo: A novel dataset for multilabel classification of depression emotions [6.26397257917403]
DepressionEmo is a dataset for detecting 8 emotions associated with depression, comprising 6,037 long Reddit user posts.
The labels were created through a majority vote over zero-shot classifications from pre-trained models.
We provide several text classification baselines in two groups: machine learning methods such as SVM, XGBoost, and LightGBM; and deep learning methods such as BERT, GAN-BERT, and BART.
arXiv Detail & Related papers (2024-01-09T16:25:31Z)
- PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z)
- Few-Shot Character Understanding in Movies as an Assessment to Meta-Learning of Theory-of-Mind [47.13015852330866]
Humans can quickly understand new fictional characters with a few observations, mainly by drawing analogies to fictional and real people they already know.
This reflects the few-shot and meta-learning essence of humans' inference of characters' mental states, i.e., theory-of-mind (ToM).
We fill this gap with a novel NLP dataset, ToM-in-AMC, the first assessment of machines' meta-learning of ToM in a realistic narrative understanding scenario.
arXiv Detail & Related papers (2022-11-09T05:06:12Z)
- EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments [75.11753644302385]
Empathetic conversational agents should not only understand what is being discussed, but also acknowledge the implied feelings of the conversation partner.
We propose a method based on a pretrained transformer language model (T5).
We evaluate our model on the EmpatheticDialogues dataset using both automated metrics and human evaluation.
arXiv Detail & Related papers (2021-10-30T19:04:48Z)
- Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes [50.569762345799354]
We argue that two issues must be tackled at the same time: (i) identifying which word is the cause for the other's emotion from his or her utterance and (ii) reflecting those specific words in the response generation.
Taking inspiration from social cognition, we leverage a generative estimator to infer emotion cause words from utterances with no word-level label.
arXiv Detail & Related papers (2021-09-18T04:22:49Z)
- A Computational Approach to Understanding Empathy Expressed in Text-Based Mental Health Support [11.736179504987712]
We present a computational approach to understanding how empathy is expressed in online mental health platforms.
We develop a novel unifying theoretically-grounded framework for characterizing the communication of empathy in text-based conversations.
arXiv Detail & Related papers (2020-09-17T17:47:00Z)
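The Psychological Depth Scale entry above reports inter-rater agreement as a Krippendorff's alpha of 0.72. As a brief, self-contained illustration of how that statistic is computed (not a reproduction of that paper's analysis), the sketch below applies the krippendorff Python package to made-up ratings; the toy values and the ordinal level of measurement are assumptions.

```python
# Sketch: computing Krippendorff's alpha, the inter-rater agreement statistic
# cited in the Psychological Depth Scale (PDS) entry above. Toy data only.
import numpy as np
import krippendorff  # pip install krippendorff

# Rows = raters, columns = stories rated on a 1-5 scale; np.nan marks a missing rating.
ratings = np.array([
    [3, 4, 2, 5, np.nan],
    [3, 4, 3, 5, 4],
    [2, 4, 2, np.nan, 4],
])

# Assumption: treating the scale as ordered categories (ordinal agreement).
alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="ordinal")
print(f"Krippendorff's alpha = {alpha:.2f}")
```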
This list is automatically generated from the titles and abstracts of the papers on this site.