Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text
- URL: http://arxiv.org/abs/2411.07451v1
- Date: Tue, 12 Nov 2024 00:24:31 GMT
- Title: Optimizing Data Delivery: Insights from User Preferences on Visuals, Tables, and Text
- Authors: Reuben Luera, Ryan Rossi, Franck Dernoncourt, Alexa Siu, Sungchul Kim, Tong Yu, Ruiyi Zhang, Xiang Chen, Nedim Lipka, Zhehao Zhang, Seon Gyeom Kim, Tak Yeon Lee
- Abstract summary: We conduct a user study where users are shown a question and asked what they would prefer to see.
We use the data to establish that a user's personal traits do influence the data outputs that they prefer.
- Abstract: In this work, we research user preferences for seeing a chart, table, or text given a question asked by the user. This enables us to understand when it is best to show a chart, table, or text to the user for a specific question. For this, we conduct a user study where users are shown a question and asked what they would prefer to see; we then use the data to establish that a user's personal traits do influence the data outputs that they prefer. Understanding how user characteristics impact a user's preferences is critical to creating data tools with a better user experience. Additionally, we investigate to what degree an LLM can replicate a user's preferences, with and without user preference data. Overall, these findings have significant implications for the development of data tools and the replication of human preferences using LLMs. Furthermore, this work demonstrates the potential use of LLMs to replicate user preference data, which has major implications for future user modeling and personalization research.
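The LLM-replication setup described in the abstract can be sketched as a prompt-construction and answer-parsing step. Everything below (function names, trait fields, the exact prompt wording) is an illustrative assumption, not the authors' implementation; the actual LLM call is omitted.

```python
MODALITIES = ("chart", "table", "text")

def build_preference_prompt(question, traits):
    """Assemble a prompt asking an LLM which output modality a user
    with the given traits would prefer for this question.
    (Hypothetical format; the paper's prompts may differ.)"""
    trait_lines = "\n".join(f"- {k}: {v}" for k, v in sorted(traits.items()))
    return (
        "A user with the following traits asked a data question.\n"
        f"Traits:\n{trait_lines}\n"
        f"Question: {question}\n"
        f"Answer with exactly one of: {', '.join(MODALITIES)}."
    )

def parse_modality(reply):
    """Map a raw LLM reply onto one of the three modalities, or None."""
    reply = reply.strip().lower()
    for m in MODALITIES:
        if m in reply:
            return m
    return None
```

Conditioning on user traits in the prompt corresponds to the "with user preference data" condition; dropping the trait block would give the "without" baseline.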
Related papers
- Aligning LLMs with Individual Preferences via Interaction [51.72200436159636]
We train large language models (LLMs) that can "interact to align".
We develop a multi-turn preference dataset containing 3K+ multi-turn conversations in tree structures.
For evaluation, we establish the ALOE benchmark, consisting of 100 carefully selected examples and well-designed metrics to measure the customized alignment performance during conversations.
arXiv Detail & Related papers (2024-10-04T17:48:29Z) - PersonalLLM: Tailoring LLMs to Individual Preferences [11.717169516971856]
We present a public benchmark, PersonalLLM, focusing on adapting LLMs to provide maximal benefits for a particular user.
We curate open-ended prompts paired with many high-quality answers over which users would be expected to display heterogeneous latent preferences.
Our dataset and generated personalities offer an innovative testbed for developing personalization algorithms.
arXiv Detail & Related papers (2024-09-30T13:55:42Z) - Understanding the Role of User Profile in the Personalization of Large Language Models [19.74964898049076]
This study first confirms that the effectiveness of user profiles is primarily due to personalization information rather than semantic information.
Within the user profile, it is the historical personalized response produced or approved by users that plays a pivotal role in personalizing LLMs.
Our findings reveal the role of user profiles for the personalization of LLMs, and showcase how incorporating user profiles impacts performance.
arXiv Detail & Related papers (2024-06-22T14:32:35Z) - Step-Back Profiling: Distilling User History for Personalized Scientific Writing [50.481041470669766]
Large language models (LLMs) excel at a variety of natural language processing tasks, yet they struggle to generate personalized content for individuals.
We introduce STEP-BACK PROFILING to personalize LLMs by distilling user history into concise profiles.
Our approach outperforms the baselines by up to 3.6 points on the general personalization benchmark.
arXiv Detail & Related papers (2024-06-20T12:58:26Z) - Aligning LLM Agents by Learning Latent Preference from User Edits [23.235995078727658]
We study interactive learning of language agents based on user edits made to the agent's output.
We propose a learning framework, PRELUDE, that infers a description of the user's latent preference based on historic edit data.
We introduce two interactive environments, summarization and email writing, and use a GPT-4-simulated user for evaluation.
arXiv Detail & Related papers (2024-04-23T17:57:47Z) - Persona-DB: Efficient Large Language Model Personalization for Response Prediction with Collaborative Data Refinement [79.2400720115588]
We introduce Persona-DB, a simple yet effective framework consisting of a hierarchical construction process to improve generalization across task contexts.
In the evaluation of response prediction, Persona-DB demonstrates superior context efficiency in maintaining accuracy with a significantly reduced retrieval size.
Our experiments also indicate a marked improvement of over 10% under cold-start scenarios, when users have extremely sparse data.
arXiv Detail & Related papers (2024-02-16T20:20:43Z) - Explainable Active Learning for Preference Elicitation [0.0]
We employ Active Learning (AL) to solve the addressed problem with the objective of maximizing information acquisition with minimal user effort.
AL selects informative data from a large unlabeled set and queries an oracle to label them.
It harvests user feedback (given for the system's explanations on the presented items) over informative samples to update an underlying machine learning (ML) model.
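The pool-based AL loop described above can be sketched in a few lines. This is a minimal illustration under assumed simplifications (1-D features, a crude nearest-class-mean probability estimate, uncertainty sampling as the query strategy); the paper's model and strategy may differ.

```python
def class_means(labeled):
    """Mean feature value per class from labeled (x, y) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def proba_positive(x, means):
    """Crude P(y=1): relative closeness to the class-1 mean."""
    d0, d1 = abs(x - means[0]), abs(x - means[1])
    return d0 / (d0 + d1) if d0 + d1 else 0.5

def most_uncertain(pool, means):
    """Uncertainty sampling: pick the point whose P(y=1) is nearest 0.5."""
    return min(pool, key=lambda x: abs(proba_positive(x, means) - 0.5))

def active_learning(pool, labeled, oracle, budget):
    """Repeatedly query the oracle (here, the user) on the most
    informative pool point, then refit the model on the grown label set."""
    pool, labeled = list(pool), list(labeled)
    for _ in range(budget):
        means = class_means(labeled)
        x = most_uncertain(pool, means)
        pool.remove(x)
        labeled.append((x, oracle(x)))
    return labeled
```

The `oracle` callable stands in for the user feedback the abstract mentions; each query is chosen to maximize information gained per unit of user effort.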
arXiv Detail & Related papers (2023-09-01T09:22:33Z) - Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on their interaction history on the platform.
Most sequential recommenders however lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
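The next-item prediction task these models solve can be sketched, at its simplest, as a first-order transition model over interaction histories. This toy frequency-based recommender is an assumption for illustration only; the paper's latent-intent model is far richer.

```python
from collections import Counter, defaultdict

def fit_transitions(histories):
    """Count item-to-item transitions across all user histories."""
    trans = defaultdict(Counter)
    for history in histories:
        for prev, nxt in zip(history, history[1:]):
            trans[prev][nxt] += 1
    return trans

def predict_next(trans, last_item, k=3):
    """Top-k most frequent next items after last_item."""
    return [item for item, _ in trans[last_item].most_common(k)]
```

A model like this captures only surface co-occurrence; the abstract's point is that a higher-level latent intent (e.g. browsing vs. purchasing) is needed to explain and optimize such transitions over the long term.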
arXiv Detail & Related papers (2022-11-17T19:00:24Z) - Presentation of a Recommender System with Ensemble Learning and Graph Embedding: A Case on MovieLens [3.8848561367220276]
Group classification and the ensemble learning technique were used for increasing prediction accuracy in recommender systems.
This study was performed on the MovieLens datasets, and the obtained results indicated the high efficiency of the presented method.
arXiv Detail & Related papers (2020-07-15T12:52:15Z) - MetaSelector: Meta-Learning for Recommendation with User-Level Adaptive Model Selection [110.87712780017819]
We propose a meta-learning framework to facilitate user-level adaptive model selection in recommender systems.
We conduct experiments on two public datasets and a real-world production dataset.
arXiv Detail & Related papers (2020-01-22T16:05:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.