AI Does Not Alter Perceptions of Text Messages
- URL: http://arxiv.org/abs/2402.01726v2
- Date: Wed, 7 Feb 2024 17:04:31 GMT
- Title: AI Does Not Alter Perceptions of Text Messages
- Authors: N'yoma Diamond
- Abstract summary: Large language models (LLMs) may prove to be the perfect tool to assist users who would otherwise find texting difficult or stressful.
Poor public sentiment regarding AI introduces the possibility that its usage may harm perceptions of AI-assisted text messages.
This study examines how the belief that a text message did or did not receive AI assistance in composition alters its perceived tone, clarity, and ability to convey intent.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For many people, anxiety, depression, and other social and mental factors can
make composing text messages an active challenge. To remedy this problem, large
language models (LLMs) may yet prove to be the perfect tool to assist users
who would otherwise find texting difficult or stressful. However, despite
rapid uptake in LLM usage, considerations for their assistive usage in text
message composition have not been explored. A primary concern regarding LLM
usage is that poor public sentiment regarding AI introduces the possibility
that its usage may harm perceptions of AI-assisted text messages, making usage
counter-productive. To test this possibility, we explore how the belief
that a text message did or did not receive AI assistance in composition alters
its perceived tone, clarity, and ability to convey intent. In this study, we
survey the perceptions of 26 participants on 18 randomly labeled pre-composed
text messages. In analyzing the participants' ratings of message tone, clarity,
and ability to convey intent, we find that there is no statistically
significant evidence that the belief that AI is utilized alters recipient
perceptions. This provides hopeful evidence that LLM-based text message
composition assistance can be implemented without the risk of
counter-productive outcomes.
Related papers
- TwIPS: A Large Language Model Powered Texting Application to Simplify Conversational Nuances for Autistic Users [0.0]
Autistic individuals often experience difficulties in conveying and interpreting emotional tone and non-literal nuances.
We present TwIPS, a prototype texting application powered by a large language model (LLM).
We leverage an AI-based simulation and a conversational script to evaluate TwIPS with 8 autistic participants in an in-lab setting.
arXiv Detail & Related papers (2024-07-25T04:15:54Z)
- Assessing AI vs Human-Authored Spear Phishing SMS Attacks: An Empirical Study Using the TRAPD Method [1.099532646524593]
This paper explores the rising concern of utilizing Large Language Models (LLMs) in spear phishing message generation.
Our pilot study compares the effectiveness of smishing (SMS phishing) messages created by GPT-4 and human authors, which have been personalized to willing targets.
arXiv Detail & Related papers (2024-06-18T20:47:16Z)
- "I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust [51.542856739181474]
We show how different natural language expressions of uncertainty impact participants' reliance, trust, and overall task performance.
We find that first-person expressions decrease participants' confidence in the system and tendency to agree with the system's answers, while increasing participants' accuracy.
Our findings suggest that using natural language expressions of uncertainty may be an effective approach for reducing overreliance on LLMs, but that the precise language used matters.
arXiv Detail & Related papers (2024-05-01T16:43:55Z)
- TEXT2TASTE: A Versatile Egocentric Vision System for Intelligent Reading Assistance Using Large Language Model [2.2469442203227863]
We propose an intelligent reading assistant based on smart glasses with embedded RGB cameras and a Large Language Model (LLM).
The video recorded from the egocentric perspective of a person wearing the glasses is processed to localise text information using object detection and optical character recognition methods.
The LLM processes the data and allows the user to interact with the text and responds to a given query, thus extending the functionality of corrective lenses.
arXiv Detail & Related papers (2024-04-14T13:39:02Z)
- Comparing Large Language Model AI and Human-Generated Coaching Messages for Behavioral Weight Loss [5.824523259910306]
Large language model (LLM) based artificial intelligence (AI) chatbots could offer more personalized and novel messages.
87 adults in a weight-loss trial rated ten coaching messages' helpfulness using a 5-point Likert scale.
arXiv Detail & Related papers (2023-12-07T05:45:24Z)
- Towards Possibilities & Impossibilities of AI-generated Text Detection: A Survey [97.33926242130732]
Large Language Models (LLMs) have revolutionized the domain of natural language processing (NLP) with remarkable capabilities of generating human-like text responses.
Despite these advancements, several works in the existing literature have raised serious concerns about the potential misuse of LLMs.
To address these concerns, a consensus among the research community is to develop algorithmic solutions to detect AI-generated text.
arXiv Detail & Related papers (2023-10-23T18:11:32Z)
- PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts [76.18347405302728]
This study uses a plethora of adversarial textual attacks targeting prompts across multiple levels: character, word, sentence, and semantic.
The adversarial prompts are then employed in diverse tasks including sentiment analysis, natural language inference, reading comprehension, machine translation, and math problem-solving.
Our findings demonstrate that contemporary Large Language Models are not robust to adversarial prompts.
arXiv Detail & Related papers (2023-06-07T15:37:00Z)
- ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text Ambiguation to Expand Mental Health Care Delivery [52.73936514734762]
ChatGPT has gained popularity for its ability to generate human-like dialogue.
Data-sensitive domains face challenges in using ChatGPT due to privacy and data-ownership concerns.
We propose a text ambiguation framework that preserves user privacy.
arXiv Detail & Related papers (2023-05-19T02:09:52Z)
- Can AI-Generated Text be Reliably Detected? [54.670136179857344]
Unregulated use of LLMs can potentially lead to malicious consequences such as plagiarism, generating fake news, spamming, etc.
Recent works attempt to tackle this problem either using certain model signatures present in the generated text outputs or by applying watermarking techniques.
In this paper, we show that these detectors are not reliable in practical scenarios.
arXiv Detail & Related papers (2023-03-17T17:53:19Z)
- How are you? Introducing stress-based text tailoring [63.128912221732946]
We discuss customising texts based on user stress level, as stress can be a critical factor in user engagement and behavioural change.
We first show a real-world example in which user behaviour is influenced by stress; then, after discussing tools that can be employed to assess and measure it, we propose an initial method for tailoring the document.
arXiv Detail & Related papers (2020-07-20T09:43:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.