PREF: Reference-Free Evaluation of Personalised Text Generation in LLMs
- URL: http://arxiv.org/abs/2508.10028v1
- Date: Fri, 08 Aug 2025 14:32:31 GMT
- Title: PREF: Reference-Free Evaluation of Personalised Text Generation in LLMs
- Authors: Xiao Fu, Hossein A. Rahmani, Bin Wu, Jerome Ramos, Emine Yilmaz, Aldo Lipani
- Abstract summary: Personalised text generation is essential for user-centric information systems. We introduce PREF, a Personalised Reference-free Evaluation Framework.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalised text generation is essential for user-centric information systems, yet most evaluation methods overlook the individuality of users. We introduce PREF, a Personalised Reference-free Evaluation Framework that jointly measures general output quality and user-specific alignment without requiring gold personalised references. PREF operates in a three-step pipeline: (1) a coverage stage uses a large language model (LLM) to generate a comprehensive, query-specific guideline covering universal criteria such as factuality, coherence, and completeness; (2) a preference stage re-ranks and selectively augments these factors using the target user's profile, stated or inferred preferences, and context, producing a personalised evaluation rubric; and (3) a scoring stage applies an LLM judge to rate candidate answers against this rubric, ensuring baseline adequacy while capturing subjective priorities. This separation of coverage from preference improves robustness, transparency, and reusability, and allows smaller models to approximate the personalised quality of larger ones. Experiments on the PrefEval benchmark, including implicit preference-following tasks, show that PREF achieves higher accuracy, better calibration, and closer alignment with human judgments than strong baselines. By enabling scalable, interpretable, and user-aligned evaluation, PREF lays the groundwork for more reliable assessment and development of personalised language generation systems.
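The three-step pipeline in the abstract can be sketched in code. This is a minimal illustration, not the authors' implementation: the `llm_*` helpers are hypothetical stand-ins for the LLM calls the paper describes (the fixed criteria list and the length-based judge are placeholder stubs), and all names and signatures are assumptions.

```python
# Sketch of PREF's pipeline: coverage -> preference -> scoring.
# The llm_* functions are placeholder stubs standing in for real LLM calls.

def llm_generate_guideline(query):
    """Coverage stage: an LLM would produce a comprehensive, query-specific
    guideline of universal criteria. Here we return a fixed placeholder set."""
    return ["factuality", "coherence", "completeness"]

def llm_personalise_rubric(criteria, user_profile):
    """Preference stage: re-rank the universal criteria by the user's stated
    priorities and selectively augment them with user-specific criteria."""
    preferred = [c for c in user_profile.get("priorities", []) if c in criteria]
    rest = [c for c in criteria if c not in preferred]
    return preferred + rest + user_profile.get("extra_criteria", [])

def llm_judge(answer, criterion):
    """Scoring stage: an LLM judge would rate the answer per criterion on,
    say, a 1-5 scale. A trivial length heuristic stands in here."""
    return 5 if len(answer.split()) > 5 else 3

def pref_score(query, answer, user_profile):
    """Run the full pipeline and return per-criterion scores, ordered by
    the personalised rubric."""
    criteria = llm_generate_guideline(query)
    rubric = llm_personalise_rubric(criteria, user_profile)
    return {c: llm_judge(answer, c) for c in rubric}
```

Because the coverage stage is independent of the user, its output can be cached and reused across users for the same query, which is one source of the reusability the abstract claims.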
Related papers
- P-GenRM: Personalized Generative Reward Model with Test-time User-based Scaling [66.55381105691818]
We propose P-GenRM, the first Personalized Generative Reward Model with test-time user-based scaling. P-GenRM transforms preference signals into structured evaluation chains that derive adaptive personas and scoring rubrics. It further clusters users into User Prototypes and introduces a dual-granularity scaling mechanism.
arXiv Detail & Related papers (2026-02-12T16:07:22Z) - PersoDPO: Scalable Preference Optimization for Instruction-Adherent, Persona-Grounded Dialogue via Multi-LLM Evaluation [20.228114552545772]
PersoDPO is a scalable preference optimisation framework. It integrates evaluation metrics targeting coherence and personalization, along with a length-format compliance feature. Experiments on the FoCus dataset show that an open-source language model fine-tuned with the PersoDPO framework consistently outperforms strong open-source baselines.
arXiv Detail & Related papers (2026-02-04T12:34:55Z) - PreferThinker: Reasoning-based Personalized Image Preference Assessment [83.66114370585976]
We propose a reasoning-based personalized image preference assessment framework. It first predicts a user's preference profile from reference images. It then provides interpretable, multi-dimensional scores and assessments of candidate images.
arXiv Detail & Related papers (2025-11-01T16:19:51Z) - POPI: Personalizing LLMs via Optimized Natural Language Preference Inference [42.25870704040321]
POPI is a general framework that introduces a preference inference model to distill heterogeneous user signals into concise natural language summaries. These summaries act as transparent, compact, and transferable personalization representations that condition a shared generation model to produce personalized responses. Extensive experiments across four personalization benchmarks demonstrate that POPI consistently improves personalization accuracy while reducing context overhead by a large margin.
arXiv Detail & Related papers (2025-10-17T23:07:57Z) - Personalized Reasoning: Just-In-Time Personalization and Why LLMs Fail At It [81.50711040539566]
Current large language model (LLM) development treats task-solving and preference alignment as separate challenges. We introduce PREFDISCO, an evaluation methodology that transforms static benchmarks into interactive personalization tasks. Our framework creates scenarios where identical questions require different reasoning chains depending on user context.
arXiv Detail & Related papers (2025-09-30T18:55:28Z) - PREFINE: Personalized Story Generation via Simulated User Critics and User-Specific Rubric Generation [2.8324853634693614]
PREFINE is a novel framework that extends the Critique-and-Refine paradigm to personalization. PREFINE constructs a pseudo-user agent from a user's interaction history and generates user-specific rubrics. Our approach holds potential for enabling efficient personalization in broader applications, such as dialogue systems, education, and recommendation.
arXiv Detail & Related papers (2025-09-16T16:39:40Z) - LOTUS: A Leaderboard for Detailed Image Captioning from Quality to Societal Bias and User Preferences [91.13704541413551]
LOTUS is a leaderboard for evaluating detailed captions. It comprehensively evaluates various aspects, including caption quality. It enables preference-oriented evaluations by tailoring criteria to diverse user preferences.
arXiv Detail & Related papers (2025-07-25T15:12:42Z) - From 1,000,000 Users to Every User: Scaling Up Personalized Preference for User-level Alignment [41.96246165999026]
Large language models (LLMs) have traditionally been aligned through one-size-fits-all approaches. This paper introduces a comprehensive framework for scalable personalized alignment of LLMs.
arXiv Detail & Related papers (2025-03-19T17:41:46Z) - Review-driven Personalized Preference Reasoning with Large Language Models for Recommendation [21.769969074938142]
EXP3RT is a novel LLM-based recommender designed to leverage rich preference information contained in user and item reviews. It generates detailed step-by-step reasoning followed by a predicted rating. Experiments show that EXP3RT outperforms existing methods on both rating prediction and candidate item reranking for top-k recommendation.
arXiv Detail & Related papers (2024-08-12T16:39:03Z) - PROXYQA: An Alternative Framework for Evaluating Long-Form Text Generation with Large Language Models [72.57329554067195]
ProxyQA is an innovative framework dedicated to assessing long-form text generation.
It comprises in-depth human-curated meta-questions spanning various domains, each accompanied by specific proxy-questions with pre-annotated answers.
It assesses the generated content's quality through the evaluator's accuracy in addressing the proxy-questions.
arXiv Detail & Related papers (2024-01-26T18:12:25Z) - Self-Evaluation Improves Selective Generation in Large Language Models [54.003992911447696]
We reformulate open-ended generation tasks into token-level prediction tasks.
We instruct an LLM to self-evaluate its answers.
We benchmark a range of scoring methods based on self-evaluation.
arXiv Detail & Related papers (2023-12-14T19:09:22Z) - Automated Evaluation of Personalized Text Generation using Large Language Models [38.2211640679274]
We present AuPEL, a novel evaluation method that distills three major semantic aspects of the generated text: personalization, quality and relevance, and automatically measures these aspects.
We find that, compared to existing evaluation metrics, AuPEL not only distinguishes and ranks models based on their personalization abilities more accurately, but also presents commendable consistency and efficiency for this task.
arXiv Detail & Related papers (2023-10-17T21:35:06Z) - Calibrating LLM-Based Evaluator [92.17397504834825]
We propose AutoCalibrate, a multi-stage, gradient-free approach to calibrate and align an LLM-based evaluator toward human preference.
Instead of explicitly modeling human preferences, we first implicitly encompass them within a set of human labels.
Our experiments on multiple text quality evaluation datasets illustrate a significant improvement in correlation with expert evaluation through calibration.
arXiv Detail & Related papers (2023-09-23T08:46:11Z) - FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets [69.91340332545094]
We introduce FLASK, a fine-grained evaluation protocol for both human-based and model-based evaluation.
We experimentally observe that the fine-graininess of evaluation is crucial for attaining a holistic view of model performance.
arXiv Detail & Related papers (2023-07-20T14:56:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.