Monte Carlo Tree Search for Interpreting Stress in Natural Language
- URL: http://arxiv.org/abs/2204.08105v1
- Date: Sun, 17 Apr 2022 23:06:01 GMT
- Title: Monte Carlo Tree Search for Interpreting Stress in Natural Language
- Authors: Kyle Swanson, Joy Hsu, Mirac Suzgun
- Abstract summary: We present a new method for explaining a person's mental state from text using Monte Carlo tree search (MCTS).
Our algorithm can find both explanations that depend on the particular context of the text and those that are context-independent.
- Score: 4.898659895355356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Natural language processing can facilitate the analysis of a person's mental
state from text they have written. Previous studies have developed models that
can predict whether a person is experiencing a mental health condition from
social media posts with high accuracy. Yet, these models cannot explain why the
person is experiencing a particular mental state. In this work, we present a
new method for explaining a person's mental state from text using Monte Carlo
tree search (MCTS). Our MCTS algorithm employs trained classification models to
guide the search for key phrases that explain the writer's mental state in a
concise, interpretable manner. Furthermore, our algorithm can find both
explanations that depend on the particular context of the text (e.g., a recent
breakup) and those that are context-independent. Using a dataset of Reddit
posts that exhibit stress, we demonstrate the ability of our MCTS algorithm to
identify interpretable explanations for a person's feeling of stress in both a
context-dependent and context-independent manner.
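As a rough illustration of the core idea, the sketch below runs a toy MCTS over subsets of a post's tokens, using a stand-in classifier's confidence minus a length penalty as the reward. It is an assumed reading of the approach, not the authors' implementation; the `classifier`, the cue words, the penalty `alpha`, and all hyperparameters are illustrative.

```python
# Minimal sketch of classifier-guided MCTS phrase search (assumptions noted
# above). States are sets of token indices; the reward is a stand-in
# classifier's confidence minus a length penalty that keeps phrases concise.
import math
import random

def classifier(phrase_tokens):
    # Hypothetical stand-in for a trained stress classifier: returns a
    # pseudo-probability that the phrase signals stress.
    cues = {"deadline", "panic", "breakup", "overwhelmed"}
    return min(1.0, 0.4 * sum(t in cues for t in phrase_tokens))

def reward(state, tokens, alpha=0.05):
    # Classifier confidence on the selected tokens, penalized by length.
    return classifier([tokens[i] for i in state]) - alpha * len(state)

def mcts_explain(tokens, iters=2000, c=1.4, stop_p=0.3):
    N, Q = {}, {}                      # visit counts, summed value
    root = frozenset()
    for _ in range(iters):
        state, path = root, [root]
        while len(state) < len(tokens):
            children = [state | {i} for i in range(len(tokens)) if i not in state]
            unvisited = [ch for ch in children if ch not in N]
            if unvisited:
                state = random.choice(unvisited)   # expansion
                path.append(state)
                break
            # UCB1 selection over visited children
            total = sum(N[ch] for ch in children)
            state = max(children, key=lambda ch: Q[ch] / N[ch]
                        + c * math.sqrt(math.log(total) / N[ch]))
            path.append(state)
            if random.random() < stop_p:           # evaluate a partial phrase
                break
        r = reward(state, tokens)
        for s in path:                             # backpropagation
            N[s] = N.get(s, 0) + 1
            Q[s] = Q.get(s, 0.0) + r
    best = max(N, key=lambda s: Q[s] / N[s])
    return [tokens[i] for i in sorted(best)]

post = "my deadline is tomorrow and the breakup left me in panic".split()
print(mcts_explain(post))   # e.g. ['deadline', 'breakup', 'panic']
```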
Related papers
- Zero-shot Explainable Mental Health Analysis on Social Media by Incorporating Mental Scales [23.94585145560042]
Mental Analysis by Incorporating Mental Scales (MAIMS) is inspired by the psychological assessment practice of using scales to evaluate mental states.
First, the patient completes mental scales, and second, the psychologist interprets the collected information from the mental scales and makes informed decisions.
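That two-step workflow could be prototyped roughly as below. This is a hedged sketch: `complete` stands in for any LLM completion call, and the scale items are PSS-style examples, not the paper's actual prompts or scales.

```python
# Rough sketch of the MAIMS two-stage workflow (illustrative prompts only).
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

SCALE_ITEMS = [  # PSS-style items, used purely for illustration
    "Felt unable to control the important things in life",
    "Felt nervous or stressed",
    "Felt difficulties were piling up too high to overcome",
]

def maims(post: str) -> str:
    # Stage 1 ("patient"): fill in the mental scale from the post's evidence.
    filled = complete(
        "Rate each item from 0 (never) to 4 (very often), using only the "
        f"evidence in this post.\nPost: {post}\nItems: {SCALE_ITEMS}")
    # Stage 2 ("psychologist"): interpret the filled scale and decide.
    return complete(
        f"Completed scale:\n{filled}\n"
        "Acting as a psychologist, interpret these scores, decide whether the "
        "writer shows signs of a mental health condition, and explain why.")
```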
arXiv Detail & Related papers (2024-02-09T09:44:06Z)
- Reliability Analysis of Psychological Concept Extraction and Classification in User-penned Text [9.26840677406494]
We use the LoST dataset to capture nuanced textual cues that suggest the presence of low self-esteem in the posts of Reddit users.
Our findings suggest the need to shift the focus of PLMs from Trigger and Consequences to a more comprehensive explanation.
arXiv Detail & Related papers (2024-01-12T17:19:14Z)
- Chat2Brain: A Method for Mapping Open-Ended Semantic Queries to Brain Activation Maps [59.648646222905235]
We propose Chat2Brain, a method that combines LLMs with Text2Brain, a basic text-to-image model, to map semantic queries to brain activation maps.
We demonstrate that Chat2Brain can synthesize plausible neural activation patterns for more complex text queries.
arXiv Detail & Related papers (2023-09-10T13:06:45Z)
- LonXplain: Lonesomeness as a Consequence of Mental Disturbance in Reddit Posts [0.41998444721319217]
Social media is a potential source of information from which latent mental states can be inferred through Natural Language Processing (NLP).
Existing literature on psychological theories points to loneliness as the major consequence of interpersonal risk factors.
We formulate lonesomeness detection in social media posts as an explainable binary classification problem.
arXiv Detail & Related papers (2023-05-30T04:21:24Z)
- Towards Interpretable Mental Health Analysis with Large Language Models [27.776003210275608]
We evaluate the mental health analysis and emotional reasoning ability of large language models (LLMs) on 11 datasets across 5 tasks.
Based on prompts, we explore LLMs for interpretable mental health analysis by instructing them to generate explanations for each of their decisions.
We conduct strict human evaluations to assess the quality of the generated explanations, leading to a novel dataset with 163 human-assessed explanations.
arXiv Detail & Related papers (2023-04-06T19:53:59Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
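One concrete way to picture such black-box selection: score each candidate explanation set by its agreement with silver labels voted across all candidates on unlabeled inputs. The sketch below is a hedged proxy-metric illustration in that spirit, not the paper's exact procedure; `predict` is a hypothetical prompted-LLM call.

```python
# Hedged sketch of selecting explanation-infused prompts on unlabeled data:
# candidate explanation sets are scored by agreement with "silver" labels
# obtained by majority vote across all candidates.
from collections import Counter
from itertools import combinations

def predict(explanation_set, x):
    raise NotImplementedError("prompt an LLM with these exemplar explanations")

def select_explanations(candidates, unlabeled, k=2):
    sets = list(combinations(candidates, k))      # candidate explanation sets
    preds = {s: [predict(s, x) for x in unlabeled] for s in sets}
    # Silver label per input: the majority prediction across candidate sets.
    silver = [Counter(p[i] for p in preds.values()).most_common(1)[0][0]
              for i in range(len(unlabeled))]
    # Keep the set that agrees most often with the silver labels.
    return max(sets, key=lambda s: sum(a == b for a, b in zip(preds[s], silver)))
```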
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning [59.16962123636579]
This paper proposes a new take on Prolog-based inference engines.
We replace handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval.
Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA.
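The Prolog analogy suggests a backward-chaining prover whose rules are produced on the fly rather than handcrafted. The toy below shows only that control flow, substituting a lookup table for the neural premise generator and exact string match for dense retrieval; it is not NELLIE's code.

```python
# Toy backward-chaining prover in the NELLIE style: the rule base is split
# into two swappable components, a premise generator and a fact retriever.
FACTS = {"an oak is a plant", "plants need water to survive"}

DECOMPOSITIONS = {  # hypothesis -> candidate premise pairs that entail it
    "an oak needs water to survive": [
        ("an oak is a plant", "plants need water to survive"),
    ],
}

def prove(hypothesis, depth=2):
    if hypothesis in FACTS:            # retrieval hit: a grounded leaf
        return [hypothesis]
    if depth == 0:
        return None
    for premises in DECOMPOSITIONS.get(hypothesis, []):
        subproofs = [prove(p, depth - 1) for p in premises]
        if all(subproofs):             # every premise was proved
            return [hypothesis] + [step for sp in subproofs for step in sp]
    return None

print(prove("an oak needs water to survive"))
# ['an oak needs water to survive', 'an oak is a plant',
#  'plants need water to survive']
```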
arXiv Detail & Related papers (2022-09-16T00:54:44Z)
- Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs [106.15931418425906]
We present the first study focused on generating natural language rationales across several complex visual reasoning tasks.
We present RationaleVT Transformer, an integrated model that learns to generate free-text rationales by combining pretrained language models with object recognition, grounded visual semantic frames, and visual commonsense graphs.
Our experiments show that the base pretrained language model benefits from visual adaptation and that free-text rationalization is a promising research direction to complement model interpretability for complex visual-textual reasoning tasks.
arXiv Detail & Related papers (2020-10-15T05:08:56Z)
- Sequential Explanations with Mental Model-Based Policies [20.64968620536829]
We apply a reinforcement learning framework to provide explanations based on the explainee's mental model.
We conduct novel online human experiments where explanations are selected and presented to participants.
Our results suggest that mental model-based policies may increase interpretability over multiple sequential explanations.
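A crude stand-in for such a policy is an epsilon-greedy bandit over explanation types, updated from a feedback signal meant to proxy the explainee's mental-model improvement. Everything here (the types, the feedback signal) is an illustrative assumption, not the paper's policy.

```python
# Hedged sketch: epsilon-greedy selection of the next explanation type.
import random

class ExplanationPolicy:
    def __init__(self, types=("example", "counterfactual", "feature-based")):
        self.value = {t: 0.0 for t in types}   # running mean reward per type
        self.count = {t: 0 for t in types}

    def choose(self, eps=0.1):
        if random.random() < eps:
            return random.choice(list(self.value))   # explore
        return max(self.value, key=self.value.get)   # exploit

    def update(self, etype, feedback):
        # feedback: e.g., the explainee's quiz-score gain after seeing this
        # explanation (a proxy for mental-model improvement)
        self.count[etype] += 1
        self.value[etype] += (feedback - self.value[etype]) / self.count[etype]

policy = ExplanationPolicy()
etype = policy.choose()
policy.update(etype, feedback=0.25)
```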
arXiv Detail & Related papers (2020-07-17T14:43:46Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
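One way to read that procedure: beam search over logical formulas built from concept masks, scored by IoU against the neuron's thresholded activation mask. The sketch below uses random toy masks and is an assumed reading of the method, not the released code.

```python
# Sketch of compositional neuron explanation: grow logical formulas over
# concept masks and keep the one whose mask best matches the neuron's
# (thresholded) activation mask, scored by intersection-over-union.
import numpy as np

def iou(a, b):
    return (a & b).sum() / max((a | b).sum(), 1)

def explain_neuron(neuron_mask, concepts, max_len=3, beam=5):
    # concepts: dict of name -> boolean mask over inputs
    frontier = sorted(concepts.items(),
                      key=lambda t: iou(neuron_mask, t[1]), reverse=True)[:beam]
    for _ in range(max_len - 1):
        grown = list(frontier)
        for form, mask in frontier:
            for name, cmask in concepts.items():
                grown.append((f"({form} AND {name})", mask & cmask))
                grown.append((f"({form} OR {name})", mask | cmask))
                grown.append((f"({form} AND NOT {name})", mask & ~cmask))
        grown.sort(key=lambda t: iou(neuron_mask, t[1]), reverse=True)
        frontier = grown[:beam]
    return frontier[0]  # (formula, mask) with the highest IoU

# Toy usage with random binary masks:
rng = np.random.default_rng(0)
concepts = {c: rng.random(1000) < 0.3 for c in ["water", "blue", "boat"]}
neuron = concepts["water"] & ~concepts["boat"]
print(explain_neuron(neuron, concepts)[0])   # e.g. "(water AND NOT boat)"
```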
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
- ESPRIT: Explaining Solutions to Physical Reasoning Tasks [106.77019206219984]
ESPRIT is a framework for commonsense reasoning about qualitative physics in natural language.
Our framework learns to generate explanations of how the physical simulation will causally evolve so that an agent or a human can easily reason about a solution.
Human evaluations indicate that ESPRIT produces crucial fine-grained details and has high coverage of physical concepts, even compared to human annotations.
arXiv Detail & Related papers (2020-05-02T07:03:06Z)