ExploreSelf: Fostering User-driven Exploration and Reflection on Personal Challenges with Adaptive Guidance by Large Language Models
- URL: http://arxiv.org/abs/2409.09662v3
- Date: Wed, 05 Feb 2025 17:41:42 GMT
- Title: ExploreSelf: Fostering User-driven Exploration and Reflection on Personal Challenges with Adaptive Guidance by Large Language Models
- Authors: Inhwa Song, SoHyun Park, Sachin R. Pendse, Jessica Lee Schleider, Munmun De Choudhury, Young-Ho Kim
- Abstract summary: Reflective prompts have been used to provide direction, and large language models (LLMs) have demonstrated the potential to provide tailored guidance. We present ExploreSelf, an LLM-driven application designed to empower users to control their reflective journey.
- Score: 15.910884179120577
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Expressing stressful experiences in words is proven to improve mental and physical health, but individuals often disengage from writing interventions as they struggle to organize their thoughts and emotions. Reflective prompts have been used to provide direction, and large language models (LLMs) have demonstrated the potential to provide tailored guidance. However, current systems often limit users' flexibility to direct their reflections. We thus present ExploreSelf, an LLM-driven application designed to empower users to control their reflective journey, providing adaptive support through dynamically generated questions. Through an exploratory study with 19 participants, we examine how participants explore and reflect on personal challenges using ExploreSelf. Our findings demonstrate that participants valued the flexible navigation of adaptive guidance to control their reflective journey, leading to deeper engagement and insight. Building on our findings, we discuss the implications of designing LLM-driven tools that facilitate user-driven and effective reflection on personal challenges.
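The adaptive-guidance loop this abstract describes can be approximated with a short prompt cycle. Below is a minimal sketch assuming an OpenAI-compatible chat API; the prompt wording, model choice, and function name are illustrative, not the authors' implementation.

```python
# Minimal sketch of adaptive reflective-question generation.
# Assumes the `openai` Python package and an API key in OPENAI_API_KEY;
# prompt text and model choice are illustrative, not the paper's own.
from openai import OpenAI

client = OpenAI()

def generate_reflective_questions(journal_text: str, n: int = 3) -> list[str]:
    """Ask the LLM for open-ended follow-up questions grounded in the entry."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You help people reflect on personal challenges. "
                        "Given a journal entry, propose open-ended questions "
                        "that follow the writer's own direction rather than "
                        "steering them. Return one question per line."},
            {"role": "user",
             "content": f"Entry:\n{journal_text}\n\nPropose {n} questions."},
        ],
    )
    text = response.choices[0].message.content or ""
    return [line.strip("- ").strip() for line in text.splitlines() if line.strip()][:n]

# The user picks whichever question resonates; the chosen question plus the
# new writing are fed back in, so guidance adapts without taking control.
```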
Related papers
- Understanding Learner-LLM Chatbot Interactions and the Impact of Prompting Guidelines [9.834055425277874]
This study investigates learner-AI interactions through an educational experiment in which participants receive structured guidance on effective prompting.
To assess user behavior and prompting efficacy, we analyze a dataset of 642 interactions from 107 users.
Our findings provide a deeper understanding of how users engage with Large Language Models and the role of structured prompting guidance in enhancing AI-assisted communication.
arXiv Detail & Related papers (2025-04-10T15:20:43Z) - Exploring the Impact of Personality Traits on Conversational Recommender Systems: A Simulation with Large Language Models [70.180385882195]
This paper introduces a personality-aware user simulation for Conversational Recommender Systems (CRSs).
The user agent induces customizable personality traits and preferences, while the system agent possesses the persuasion capability to simulate realistic interaction in CRSs.
Experimental results demonstrate that state-of-the-art LLMs can effectively generate diverse user responses aligned with specified personality traits.
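A minimal sketch of such a two-agent simulation loop, assuming a generic `llm(prompt) -> reply` callable; the trait schema and prompt wording are illustrative assumptions, not the paper's protocol.

```python
# Sketch of a persona-conditioned user simulator paired with a CRS agent.
# `llm` stands in for any chat-completion call (prompt -> reply); the trait
# schema and wording are illustrative assumptions, not the paper's protocol.
from typing import Callable

def make_user_prompt(traits: dict[str, str], history: str) -> str:
    trait_desc = ", ".join(f"{k}: {v}" for k, v in traits.items())
    return (f"You are simulating a user with these personality traits: {trait_desc}. "
            f"Stay in character and react to the recommender.\n\n{history}\nUser:")

def simulate_dialogue(llm: Callable[[str], str],
                      traits: dict[str, str], turns: int = 3) -> str:
    """Alternate recommender and simulated-user turns; return the transcript."""
    history = "User: I'm looking for a movie tonight."
    for _ in range(turns):
        rec = llm("You are a persuasive movie recommender.\n\n" + history + "\nRecommender:")
        history += f"\nRecommender: {rec}"
        user = llm(make_user_prompt(traits, history))
        history += f"\nUser: {user}"
    return history

# e.g. simulate_dialogue(my_llm, {"openness": "low", "agreeableness": "high"})
```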
arXiv Detail & Related papers (2025-04-09T13:21:17Z) - Meta-Reflection: A Feedback-Free Reflection Learning Framework [57.14485943991588]
We propose Meta-Reflection, a feedback-free reflection mechanism that requires only a single inference pass without external feedback.
Motivated by the human ability to remember and retrieve reflections from past experiences, Meta-Reflection integrates reflective insights into a codebook.
To thoroughly investigate and evaluate the practicality of Meta-Reflection in real-world scenarios, we introduce an industrial e-commerce benchmark named E-commerce Customer Intent Detection.
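The codebook mechanism might be approximated as follows; a minimal sketch assuming an arbitrary `embed(text) -> vector` function, with the retrieval scheme an illustrative guess rather than the paper's design.

```python
# Sketch of a reflection codebook: distilled insights from past failures are
# stored with an embedding of the problem and retrieved by cosine similarity,
# so inference needs one pass and no external feedback. The embedding and
# retrieval choices are illustrative assumptions.
import numpy as np

class ReflectionCodebook:
    def __init__(self, embed):            # embed: str -> np.ndarray
        self.embed = embed
        self.keys: list[np.ndarray] = []  # problem embeddings
        self.insights: list[str] = []     # distilled reflections

    def add(self, problem: str, insight: str) -> None:
        self.keys.append(self.embed(problem))
        self.insights.append(insight)

    def retrieve(self, problem: str, k: int = 2) -> list[str]:
        q = self.embed(problem)
        sims = [float(q @ key / (np.linalg.norm(q) * np.linalg.norm(key)))
                for key in self.keys]
        top = np.argsort(sims)[::-1][:k]
        return [self.insights[i] for i in top]

# At inference: prepend retrieved insights to the prompt and answer in one pass.
```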
arXiv Detail & Related papers (2024-12-18T12:20:04Z) - From Laws to Motivation: Guiding Exploration through Law-Based Reasoning and Rewards [12.698095783768322]
Large Language Models (LLMs) and Reinforcement Learning (RL) are powerful approaches for building autonomous agents.
We propose a method that extracts experience from interaction records to model the underlying laws of the game environment.
arXiv Detail & Related papers (2024-11-24T15:57:53Z) - Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - On Targeted Manipulation and Deception when Optimizing LLMs for User Feedback [7.525470776920495]
Training to maximize human feedback creates a perverse incentive structure for the AI.
We find that extreme forms of "feedback gaming" such as manipulation and deception are learned reliably.
We hope our results can highlight the risks of using gameable feedback sources -- such as user feedback -- as a target for RL.
arXiv Detail & Related papers (2024-11-04T17:31:02Z) - Thinking LLMs: General Instruction Following with Thought Generation [56.30755438254918]
We propose a training method for equipping existing LLMs with such thinking abilities for general instruction following without use of additional human data.
For each instruction, candidate thoughts are scored using a judge model that evaluates only the resulting responses, and the model is then optimized via preference optimization.
We show that this procedure leads to superior performance on AlpacaEval and Arena-Hard, and shows gains from thinking on non-reasoning categories such as marketing, health and general knowledge.
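The judging step can be sketched as building DPO-style preference pairs in which only the responses are scored; `generate` and `judge_score` are placeholder callables, not the paper's API.

```python
# Sketch of thought-candidate scoring for preference optimization: sample
# several (thought, response) pairs, score only the responses with a judge,
# and keep the best/worst pair as preference data. `generate` and
# `judge_score` are placeholder callables, not the paper's API.
from typing import Callable

def build_preference_pair(
    instruction: str,
    generate: Callable[[str], tuple[str, str]],  # -> (thought, response)
    judge_score: Callable[[str, str], float],    # (instruction, response) -> score
    k: int = 8,
) -> dict[str, str]:
    candidates = [generate(instruction) for _ in range(k)]
    scored = sorted(candidates,
                    key=lambda tr: judge_score(instruction, tr[1]),
                    reverse=True)
    chosen, rejected = scored[0], scored[-1]
    # The full (thought + response) texts form the preference pair, so the
    # model learns which *thinking* leads to better-judged answers.
    return {"prompt": instruction,
            "chosen": chosen[0] + "\n" + chosen[1],
            "rejected": rejected[0] + "\n" + rejected[1]}
```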
arXiv Detail & Related papers (2024-10-14T15:38:56Z) - PersonaFlow: Boosting Research Ideation with LLM-Simulated Expert Personas [12.593617990325528]
We introduce PersonaFlow, an LLM-based system using persona simulation to support research ideation.
Our findings indicate that using multiple personas during ideation significantly enhances user-perceived quality of outcomes.
Users' persona customization interactions significantly improved their sense of control and recall of generated ideas.
arXiv Detail & Related papers (2024-09-19T07:54:29Z) - Supporting Self-Reflection at Scale with Large Language Models: Insights from Randomized Field Experiments in Classrooms [7.550701021850185]
We investigate the potential of Large Language Models (LLMs) to help students engage in post-lesson reflection.
We conducted two randomized field experiments in undergraduate computer science courses.
arXiv Detail & Related papers (2024-06-01T02:41:59Z) - Towards Safety and Helpfulness Balanced Responses via Controllable Large Language Models [64.5204594279587]
A model that prioritizes safety can leave users feeling less engaged and assisted, while one that prioritizes helpfulness can potentially cause harm.
We propose to balance safety and helpfulness in diverse use cases by controlling both attributes in large language models.
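One generic way to expose both attributes as controls is to condition generation on explicit levels; the control-token scheme below is an illustration of the idea, not the paper's method.

```python
# Generic illustration of conditioning a model on explicit safety and
# helpfulness levels; the token scheme is an assumption, not the paper's method.
def controlled_prompt(user_msg: str, safety: int, helpfulness: int) -> str:
    """safety/helpfulness in 0..4, prepended as control codes."""
    assert 0 <= safety <= 4 and 0 <= helpfulness <= 4
    return f"<safety={safety}><helpfulness={helpfulness}> {user_msg}"

# A deployment picks the operating point per use case, e.g.
# controlled_prompt("How do I dispose of old medication?", safety=4, helpfulness=3)
```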
arXiv Detail & Related papers (2024-04-01T17:59:06Z) - Mirror: A Multiple-perspective Self-Reflection Method for Knowledge-rich Reasoning [18.5717357875955]
Large language models (LLMs) struggle with knowledge-rich problems without access to external resources.
We propose Mirror, a Multiple-perspective self-reflection method for knowledge-rich reasoning.
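A rough sketch of the multiple-perspective idea, with the perspective list and reconciliation prompt as illustrative assumptions rather than Mirror's actual procedure.

```python
# Rough sketch of multiple-perspective self-reflection: answer the question
# under several viewpoints, then reconcile. Perspectives and prompts are
# illustrative assumptions, not Mirror's actual procedure.
from typing import Callable

PERSPECTIVES = [
    "a domain expert double-checking factual claims",
    "a skeptic looking for unsupported assumptions",
    "a teacher explaining step by step",
]

def mirror_answer(llm: Callable[[str], str], question: str) -> str:
    drafts = [llm(f"As {p}, answer:\n{question}") for p in PERSPECTIVES]
    joined = "\n---\n".join(drafts)
    return llm("Reconcile these answers into one response, keeping only "
               f"claims the perspectives agree on:\n{joined}")
```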
arXiv Detail & Related papers (2024-02-22T20:57:17Z) - Predicting challenge moments from students' discourse: A comparison of GPT-4 to two traditional natural language processing approaches [0.3826704341650507]
This study investigates the potential of three distinct natural language processing approaches for predicting challenge moments from students' discourse.
An expert-knowledge rule-based model, a supervised machine learning (ML) model, and a Large Language Model (LLM) were investigated.
The results show that the supervised ML and the LLM approaches performed considerably well in both tasks.
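For comparison with the rule-based and supervised models, the LLM route can be as simple as zero-shot labeling of each utterance; in this hedged sketch the label set and prompt are assumptions, not the study's materials.

```python
# Illustration of the LLM route: zero-shot labeling of student utterances.
# Label set and prompt wording are assumptions, not the study's materials.
from typing import Callable

def label_utterance(llm: Callable[[str], str], utterance: str) -> str:
    prompt = ("Does this student utterance signal a moment of challenge "
              "(confusion, being stuck, frustration)? Answer exactly "
              f"'challenge' or 'no_challenge'.\nUtterance: {utterance}")
    answer = llm(prompt).strip().lower()
    return "no_challenge" if "no_challenge" in answer else "challenge"
```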
arXiv Detail & Related papers (2024-01-03T11:54:30Z) - RELIC: Investigating Large Language Model Responses using Self-Consistency [58.63436505595177]
Large Language Models (LLMs) are notorious for blending fact with fiction and generating non-factual content, known as hallucinations.
We propose an interactive system that helps users gain insight into the reliability of the generated text.
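Self-consistency as a reliability signal can be approximated by resampling answers and counting agreement with a claim; a simplified sketch of the idea, not RELIC's interface.

```python
# Approximation of self-consistency checking: sample N alternative answers
# and ask the model whether each supports a claim from the original output.
# The verification prompt is a simplification, not RELIC's interface.
from typing import Callable

def consistency_score(llm: Callable[[str], str], question: str,
                      claim: str, n: int = 5) -> float:
    """Fraction of resampled answers that agree with the claim (0..1)."""
    agree = 0
    for _ in range(n):
        sample = llm(question)  # assumes sampling with temperature > 0
        verdict = llm(f"Answer yes or no: does this text support the claim "
                      f"'{claim}'?\nText: {sample}")
        agree += verdict.strip().lower().startswith("yes")
    return agree / n

# Low scores flag statements the user should double-check.
```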
arXiv Detail & Related papers (2023-11-28T14:55:52Z) - Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies [104.32199881187607]
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tasks, yet their output can still contain errors and inconsistencies.
A promising approach to rectify these flaws is self-correction, where the LLM itself is prompted or guided to fix problems in its own output.
This paper presents a comprehensive review of this emerging class of techniques.
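The most common pattern in this family, prompting the model to critique and then revise its own output, reduces to a short loop; the prompts and stopping rule below are generic illustrations, not any specific surveyed method.

```python
# Generic critique-and-revise self-correction loop, the simplest pattern in
# this family; prompts and stopping rule are generic illustrations.
from typing import Callable

def self_correct(llm: Callable[[str], str], task: str, max_rounds: int = 3) -> str:
    draft = llm(task)
    for _ in range(max_rounds):
        critique = llm(f"Task: {task}\nDraft: {draft}\n"
                       "List concrete errors, or say 'NONE' if correct.")
        if "NONE" in critique:
            break
        draft = llm(f"Task: {task}\nDraft: {draft}\nCritique: {critique}\n"
                    "Rewrite the draft fixing every listed error.")
    return draft
```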
arXiv Detail & Related papers (2023-08-06T18:38:52Z) - Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
arXiv Detail & Related papers (2023-05-21T14:35:32Z) - PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training [94.87393610927812]
We present an off-policy, interactive reinforcement learning algorithm that capitalizes on the strengths of both feedback and off-policy learning.
We demonstrate that our approach is capable of learning tasks of higher complexity than previously considered by human-in-the-loop methods.
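PEBBLE's relabeling step, rewriting stored rewards whenever the learned reward model is updated so off-policy data stays consistent, can be sketched as follows; the buffer layout is a simplification.

```python
# Sketch of PEBBLE-style relabeling: when the reward model learned from human
# preferences is updated, rewrite the rewards of every stored transition so
# the off-policy replay buffer stays consistent. Buffer layout is simplified.
import numpy as np

def relabel_buffer(observations: np.ndarray,   # (N, obs_dim)
                   actions: np.ndarray,        # (N, act_dim)
                   rewards: np.ndarray,        # (N,), rewritten in place
                   reward_model) -> None:
    """reward_model(obs, act) -> predicted reward per transition."""
    rewards[:] = reward_model(observations, actions)

# After each batch of human preference feedback updates reward_model, the
# agent relabels and continues off-policy training (e.g. SAC) on old data.
```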
arXiv Detail & Related papers (2021-06-09T14:10:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.