How are you? Introducing stress-based text tailoring
- URL: http://arxiv.org/abs/2007.09970v1
- Date: Mon, 20 Jul 2020 09:43:11 GMT
- Title: How are you? Introducing stress-based text tailoring
- Authors: Simone Balloccu, Ehud Reiter, Alexandra Johnstone, Claire Fyfe
- Abstract summary: We discuss customising texts based on user stress level, as it could represent a critical factor when it comes to user engagement and behavioural change.
We first show a real-world example in which user behaviour is influenced by stress; then, after discussing which tools can be employed to assess and measure it, we propose an initial method for tailoring the document.
- Score: 63.128912221732946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Can stress affect not only your life but also how you read and interpret a
text? Healthcare has shown evidence of such dynamics and in this short paper we
discuss customising texts based on user stress level, as it could represent a
critical factor when it comes to user engagement and behavioural change. We
first show a real-world example in which user behaviour is influenced by
stress, then, after discussing which tools can be employed to assess and
measure it, we propose an initial method for tailoring the document by
exploiting complexity reduction and affect enforcement. The result is a short
and encouraging text which requires less commitment to be read and understood.
We believe this work in progress can raise some interesting questions on a
topic that is often overlooked in NLG.
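The two ingredients named in the abstract, complexity reduction and affect enforcement, can be illustrated with a short sketch. This is purely illustrative and not the authors' implementation: the substitution lexicon, the 20-word sentence cap, the crude Flesch-style scoring, and the encouragement string are all assumptions made for the example.

```python
import re

# Hypothetical resources: the paper does not publish a lexicon or thresholds.
SIMPLER = {"utilise": "use", "commence": "start", "approximately": "about"}
ENCOURAGEMENT = "You are doing great. Keep going!"

def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch Reading Ease: higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    # Crude syllable count: runs of vowels per word, at least one per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

def tailor(text: str, stressed: bool) -> str:
    """Reduce complexity and enforce positive affect for stressed readers."""
    if not stressed:
        return text
    # Complexity reduction: swap harder words for simpler synonyms ...
    for hard, easy in SIMPLER.items():
        text = re.sub(rf"\b{hard}\b", easy, text, flags=re.IGNORECASE)
    # ... and drop long sentences, lowering the commitment needed to read.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    kept = [s for s in sentences if len(s.split()) <= 20]
    # Affect enforcement: close with an encouraging remark.
    return " ".join(kept + [ENCOURAGEMENT])
```

A stressed reader would receive the shortened, simplified variant via `tailor(original, stressed=True)`, while an unstressed reader sees the original; a readability score such as the one above could gate whether further simplification is needed.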
Related papers
- StressPrompt: Does Stress Impact Large Language Models and Human Performance Similarly? [7.573284169975824]
This study explores whether Large Language Models (LLMs) exhibit stress responses similar to those of humans.
We developed a novel set of prompts, termed StressPrompt, designed to induce varying levels of stress.
The findings suggest that LLMs, like humans, perform optimally under moderate stress, consistent with the Yerkes-Dodson law.
arXiv Detail & Related papers (2024-09-14T08:32:31Z)
- CoS: Enhancing Personalization and Mitigating Bias with Context Steering [5.064910647314323]
Context can significantly shape the response of a large language model (LLM).
We propose Context Steering (CoS) - a training-free method that can be easily applied to autoregressive LLMs at inference time.
We showcase a variety of applications of CoS including amplifying the contextual influence to achieve better personalization and mitigating unwanted influence for reducing model bias.
arXiv Detail & Related papers (2024-05-02T22:37:38Z)
- AI Does Not Alter Perceptions of Text Messages [0.0]
Large language models (LLMs) may prove to be the perfect tool to assist users that would otherwise find texting difficult or stressful.
Poor public sentiment regarding AI introduces the possibility that its usage may harm perceptions of AI-assisted text messages.
This study examines how the belief that a text message did or did not receive AI assistance in composition alters its perceived tone, clarity, and ability to convey intent.
arXiv Detail & Related papers (2024-01-27T14:32:12Z)
- PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z)
- Editing Personality for Large Language Models [73.59001811199823]
This paper introduces an innovative task focused on editing the personality traits of Large Language Models (LLMs).
We construct PersonalityEdit, a new benchmark dataset to address this task.
arXiv Detail & Related papers (2023-10-03T16:02:36Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations [56.85319224208865]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
- Readability Research: An Interdisciplinary Approach [62.03595526230364]
We aim to provide a firm foundation and a comprehensive framework for readability research.
Readability refers to aspects of visual information design which impact information flow from the page to the reader.
These aspects can be modified on-demand, instantly improving the ease with which a reader can process and derive meaning from text.
arXiv Detail & Related papers (2021-07-20T16:52:17Z)
- Predicting Text Readability from Scrolling Interactions [6.530293714772306]
This paper investigates how scrolling behaviour relates to the readability of a text.
We make our dataset publicly available and show that there are statistically significant differences in the way readers interact with text depending on the text level.
arXiv Detail & Related papers (2021-05-13T15:27:00Z)
- TextHide: Tackling Data Privacy in Language Understanding Tasks [54.11691303032022]
TextHide mitigates privacy risks without slowing down training or reducing accuracy.
It requires all participants to add a simple encryption step to prevent an eavesdropping attacker from recovering private text data.
We evaluate TextHide on the GLUE benchmark, and our experiments show that TextHide can effectively defend attacks on shared gradients or representations.
arXiv Detail & Related papers (2020-10-12T22:22:15Z)
- Constructing a Testbed for Psychometric Natural Language Processing [0.5801044612920815]
We describe our efforts to construct a corpus for psychometric natural language processing (NLP).
We discuss our multi-step process to align user text with their survey-based response items.
We report preliminary results on the use of the text to categorize/predict users' survey response labels.
arXiv Detail & Related papers (2020-07-25T16:29:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.