Semiotic Reconstruction of Destination Expectation Constructs: An LLM-Driven Computational Paradigm for Social Media Tourism Analytics
- URL: http://arxiv.org/abs/2505.16118v1
- Date: Thu, 22 May 2025 01:52:01 GMT
- Title: Semiotic Reconstruction of Destination Expectation Constructs: An LLM-Driven Computational Paradigm for Social Media Tourism Analytics
- Authors: Haotian Lan, Yao Gao, Yujun Cheng, Wei Yuan, Kun Wang
- Abstract summary: Social media's rise establishes user-generated content (UGC) as pivotal for travel decisions. This study introduces a dual-method LLM framework: unsupervised expectation extraction paired with survey-informed fine-tuning. Findings reveal leisure/social expectations drive engagement more than foundational natural/emotional factors.
- Score: 10.646175272534082
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media's rise establishes user-generated content (UGC) as pivotal for travel decisions, yet analytical methods lack scalability. This study introduces a dual-method LLM framework: unsupervised expectation extraction from UGC paired with survey-informed supervised fine-tuning. Findings reveal leisure/social expectations drive engagement more than foundational natural/emotional factors. By establishing LLMs as precision tools for expectation quantification, we advance tourism analytics methodology and propose targeted strategies for experience personalization and social travel promotion. The framework's adaptability extends to consumer behavior research, demonstrating computational social science's transformative potential in marketing optimization.
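The abstract does not specify how the unsupervised expectation-extraction stage is implemented. As a minimal illustrative sketch (the dimension names, prompt wording, and helper functions below are all assumptions, and the LLM call itself is stubbed with a canned reply rather than a real API request), one plausible shape for that stage is:

```python
import json

# Hypothetical expectation dimensions, loosely following the abstract's
# leisure/social vs. natural/emotional distinction (names are assumptions).
DIMENSIONS = ["leisure", "social", "natural", "emotional"]

def build_extraction_prompt(ugc_text):
    """Compose a zero-shot prompt asking an LLM to rate how strongly a
    piece of user-generated content expresses each expectation dimension."""
    return (
        "Rate how strongly the following travel post expresses each "
        "expectation on a 0-1 scale. Reply as JSON with keys "
        f"{DIMENSIONS}.\n\nPost: {ugc_text}"
    )

def parse_expectation_scores(llm_reply):
    """Parse the LLM's JSON reply into a {dimension: score} dict,
    clamping scores to [0, 1] and defaulting missing keys to 0.0."""
    raw = json.loads(llm_reply)
    return {d: min(1.0, max(0.0, float(raw.get(d, 0.0)))) for d in DIMENSIONS}

# Stubbed LLM reply standing in for a real model call:
reply = '{"leisure": 0.9, "social": 0.7, "natural": 0.2, "emotional": 0.1}'
scores = parse_expectation_scores(reply)
```

Per-post score vectors of this form could then serve as weak labels or features for the survey-informed fine-tuning stage the abstract describes.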
Related papers
- Aligning LLM with human travel choices: a persona-based embedding learning approach [15.11130742093296]
This paper introduces a novel framework for aligning large language models with human travel choice behavior. Our framework uses a persona inference and loading process to condition LLMs with suitable prompts to enhance alignment.
arXiv Detail & Related papers (2025-05-25T06:54:01Z) - Where You Go is Who You Are: Behavioral Theory-Guided LLMs for Inverse Reinforcement Learning [4.345382237366071]
This study introduces SILIC, short for Sociodemographic Inference with LLM-guided Inverse Reinforcement Learning (IRL) and Cognitive Chain Reasoning (CCR). CCR infers sociodemographic attributes from observed mobility patterns by capturing latent behavioral intentions and reasoning through psychological constructs. Our method substantially outperforms state-of-the-art baselines and shows great promise for enriching big trajectory data to support behaviorally grounded applications in transportation planning and beyond.
arXiv Detail & Related papers (2025-05-22T19:56:03Z) - SCRAG: Social Computing-Based Retrieval Augmented Generation for Community Response Forecasting in Social Media Environments [8.743208265682014]
SCRAG is a prediction framework inspired by social computing. It forecasts community responses to real or hypothetical social media posts. It can be used by public relations specialists to craft messaging in ways that avoid unintended misinterpretations.
arXiv Detail & Related papers (2025-04-18T15:02:31Z) - FamilyTool: A Multi-hop Personalized Tool Use Benchmark [94.1158032740113]
We introduce FamilyTool, a novel benchmark grounded in a family-based knowledge graph (KG). FamilyTool challenges large language models with queries spanning 1 to 3 relational hops. Experiments reveal significant performance gaps in state-of-the-art LLMs.
arXiv Detail & Related papers (2025-04-09T10:42:36Z) - A Survey of Scaling in Large Language Model Reasoning [62.92861523305361]
We provide a comprehensive examination of scaling in large language model (LLM) reasoning. We analyze scaling in reasoning steps that improves multi-step inference and logical consistency. We discuss scaling in training-enabled reasoning, focusing on optimization through iterative model improvement.
arXiv Detail & Related papers (2025-04-02T23:51:27Z) - ReMA: Learning to Meta-think for LLMs with Multi-Agent Reinforcement Learning [53.817538122688944]
We introduce Reinforced Meta-thinking Agents (ReMA) to elicit meta-thinking behaviors from Large Language Models (LLMs). ReMA decouples the reasoning process into two hierarchical agents: a high-level meta-thinking agent responsible for generating strategic oversight and plans, and a low-level reasoning agent for detailed executions. Empirical results from single-turn experiments demonstrate that ReMA outperforms single-agent RL baselines on complex reasoning tasks.
arXiv Detail & Related papers (2025-03-12T16:05:31Z) - Can LLMs Simulate Social Media Engagement? A Study on Action-Guided Response Generation [51.44040615856536]
This paper analyzes large language models' ability to simulate social media engagement through action-guided response generation. We benchmark GPT-4o-mini, O1-mini, and DeepSeek-R1 in social media engagement simulation regarding a major societal event.
arXiv Detail & Related papers (2025-02-17T17:43:08Z) - Build An Influential Bot In Social Media Simulations With Large Language Models [7.242974711907219]
This study introduces a novel simulated environment that combines Agent-Based Modeling (ABM) with Large Language Models (LLMs). We present an innovative application of Reinforcement Learning (RL) to replicate the process of opinion leader formation. Our findings reveal that limiting the action space and incorporating self-observation are key factors for achieving stable opinion leader generation.
arXiv Detail & Related papers (2024-11-29T11:37:12Z) - Evaluating Cultural and Social Awareness of LLM Web Agents [113.49968423990616]
We introduce CASA, a benchmark designed to assess large language models' sensitivity to cultural and social norms. Our approach evaluates LLM agents' ability to detect and appropriately respond to norm-violating user queries and observations. Experiments show that current LLMs perform significantly better in non-agent environments.
arXiv Detail & Related papers (2024-10-30T17:35:44Z) - Chain-of-Thought Prompting for Demographic Inference with Large Multimodal Models [58.58594658683919]
Large multimodal models (LMMs) have shown transformative potential across various research tasks.
Our findings indicate LMMs possess advantages in zero-shot learning, interpretability, and handling uncurated 'in-the-wild' inputs.
We propose a Chain-of-Thought augmented prompting approach, which effectively mitigates the off-target prediction issue.
arXiv Detail & Related papers (2024-05-24T16:26:56Z) - Do LLM Agents Exhibit Social Behavior? [5.094340963261968]
State-Understanding-Value-Action (SUVA) is a framework to systematically analyze responses in social contexts.
It assesses social behavior through both their final decisions and the response generation processes leading to those decisions.
We demonstrate that utterance-based reasoning reliably predicts LLMs' final actions.
arXiv Detail & Related papers (2023-12-23T08:46:53Z) - Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z) - SurveyLM: A platform to explore emerging value perspectives in augmented language models' behaviors [0.4724825031148411]
This white paper presents our work on SurveyLM, a platform for analyzing augmented language models' (ALMs) emergent alignment behaviors.
We apply survey and experimental methodologies, traditionally used in studying social behaviors, to evaluate ALMs systematically.
We aim to shed light on factors influencing ALMs' emergent behaviors, facilitate their alignment with human intentions and expectations, and thereby contribute to the responsible development and deployment of advanced social AI systems.
arXiv Detail & Related papers (2023-08-01T12:59:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.