Reading Users' Minds from What They Say: An Investigation into LLM-based Empathic Mental Inference
- URL: http://arxiv.org/abs/2403.13301v1
- Date: Wed, 20 Mar 2024 04:57:32 GMT
- Title: Reading Users' Minds from What They Say: An Investigation into LLM-based Empathic Mental Inference
- Authors: Qihao Zhu, Leah Chong, Maria Yang, Jianxi Luo
- Abstract summary: In human-centered design, developing a comprehensive and in-depth understanding of user experiences is paramount. However, accurately comprehending the real underlying mental states of a large human population remains a significant challenge today. This paper investigates the use of Large Language Models (LLMs) for performing mental inference tasks.
- Score: 6.208698652041961
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In human-centered design, developing a comprehensive and in-depth understanding of user experiences, i.e., empathic understanding, is paramount for designing products that truly meet human needs. Nevertheless, accurately comprehending the real underlying mental states of a large human population remains a significant challenge today. This difficulty mainly arises from the trade-off between depth and scale of user experience research: gaining in-depth insights from a small group of users does not easily scale to a larger population, and vice versa. This paper investigates the use of Large Language Models (LLMs) for performing mental inference tasks, specifically inferring users' underlying goals and fundamental psychological needs (FPNs). Baseline and benchmark datasets were collected from human users and designers to develop an empathic accuracy metric for measuring the mental inference performance of LLMs. The empathic accuracy of different LLMs in inferring goals and FPNs, using varied zero-shot prompt engineering techniques, is compared against that of human designers. Experimental results suggest that LLMs can infer and understand the underlying goals and FPNs of users with performance comparable to that of human designers, suggesting a promising avenue for enhancing the scalability of empathic design approaches through the integration of advanced artificial intelligence technologies. This work has the potential to significantly augment the toolkit available to designers during human-centered design, enabling the development of both large-scale and in-depth understanding of users' experiences.
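The abstract mentions an empathic accuracy metric but does not specify its computation. As a hypothetical illustration only (not the paper's actual method), the sketch below scores LLM-inferred user goals against ground-truth statements using simple word-overlap similarity; the actual work derives its metric from baseline and benchmark data collected from human users and designers.

```python
# Hypothetical sketch of an empathic-accuracy-style score: average best-match
# similarity between ground-truth user goals and LLM-inferred goals.
# Word-level Jaccard overlap stands in for a real semantic similarity measure.

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two short statements."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def empathic_accuracy(inferred: list[str], ground_truth: list[str]) -> float:
    """For each ground-truth goal, take its best match among the inferred
    goals, then average those best-match scores."""
    if not ground_truth or not inferred:
        return 0.0
    return sum(max(jaccard(g, i) for i in inferred)
               for g in ground_truth) / len(ground_truth)

# Example: two inferred goals, two ground-truth goals (one matched, one missed).
inferred = ["save time on daily commute", "reduce travel cost"]
truth = ["save time on daily commute", "feel safe while traveling"]
score = empathic_accuracy(inferred, truth)  # 1.0 for the exact match, 0.0 for the miss
```

In practice a metric like this would use embedding-based similarity rather than token overlap, and would be calibrated against human-designer performance as the paper's benchmark does.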
Related papers
- Evaluating Cultural and Social Awareness of LLM Web Agents [113.49968423990616]
We introduce CASA, a benchmark designed to assess large language models' sensitivity to cultural and social norms.
Our approach evaluates LLM agents' ability to detect and appropriately respond to norm-violating user queries and observations.
Experiments show that current LLMs perform significantly better in non-agent environments.
arXiv Detail & Related papers (2024-10-30T17:35:44Z)
- BIG5-CHAT: Shaping LLM Personalities Through Training on Human-Grounded Data [28.900987544062257]
We introduce BIG5-CHAT, a large-scale dataset containing 100,000 dialogues designed to ground models in how humans express their personality in text.
Our methods outperform prompting on personality assessments such as BFI and IPIP-NEO, with trait correlations more closely matching human data.
Our experiments reveal that models trained to exhibit higher conscientiousness, higher agreeableness, lower extraversion, and lower neuroticism display better performance on reasoning tasks.
arXiv Detail & Related papers (2024-10-21T20:32:27Z)
- HERM: Benchmarking and Enhancing Multimodal LLMs for Human-Centric Understanding [68.4046326104724]
We introduce HERM-Bench, a benchmark for evaluating the human-centric understanding capabilities of MLLMs.
Our work reveals the limitations of existing MLLMs in understanding complex human-centric scenarios.
We present HERM-100K, a comprehensive dataset with multi-level human-centric annotations, aimed at enhancing MLLMs' training.
arXiv Detail & Related papers (2024-10-09T11:14:07Z)
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- PersonaFlow: Boosting Research Ideation with LLM-Simulated Expert Personas [12.593617990325528]
We introduce PersonaFlow, an LLM-based system using persona simulation to support research ideation.
Our findings indicate that using multiple personas during ideation significantly enhances user-perceived quality of outcomes.
Users' persona customization interactions significantly improved their sense of control and recall of generated ideas.
arXiv Detail & Related papers (2024-09-19T07:54:29Z)
- Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making [51.737762570776006]
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance as well as improved grounded decision-making capability.
arXiv Detail & Related papers (2024-08-17T11:49:53Z)
- What You Need is What You Get: Theory of Mind for an LLM-Based Code Understanding Assistant [0.0]
A growing number of tools have used Large Language Models (LLMs) to support developers' code understanding.
In this study, we designed an LLM-based conversational assistant that provides a personalized interaction based on inferred user mental state.
Our results provide insights for researchers and tool builders who want to create or improve LLM-based conversational assistants to support novices in code understanding.
arXiv Detail & Related papers (2024-08-08T14:08:15Z)
- CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics [0.0]
Integrating cognitive ergonomics with LLMs is crucial for improving safety, reliability, and user satisfaction in human-AI interactions.
Current LLM designs often lack this integration, resulting in systems that may not fully align with human cognitive capabilities and limitations.
arXiv Detail & Related papers (2024-07-03T07:59:52Z)
- Persona-DB: Efficient Large Language Model Personalization for Response Prediction with Collaborative Data Refinement [79.2400720115588]
We introduce Persona-DB, a simple yet effective framework consisting of a hierarchical construction process to improve generalization across task contexts.
In the evaluation of response prediction, Persona-DB demonstrates superior context efficiency in maintaining accuracy with a significantly reduced retrieval size.
Our experiments also indicate a marked improvement of over 10% under cold-start scenarios, when users have extremely sparse data.
arXiv Detail & Related papers (2024-02-16T20:20:43Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- User profile-driven large-scale multi-agent learning from demonstration in federated human-robot collaborative environments [5.218882272051637]
This paper introduces a novel user profile formulation for providing a fine-grained representation of the exhibited human behavior.
The overall designed scheme enables both short- and long-term analysis/interpretation of the human behavior.
arXiv Detail & Related papers (2021-03-30T15:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.