Token-Importance Guided Direct Preference Optimization
- URL: http://arxiv.org/abs/2505.19653v1
- Date: Mon, 26 May 2025 08:11:24 GMT
- Title: Token-Importance Guided Direct Preference Optimization
- Authors: Yang Ning, Lin Hai, Liu Yibo, Tian Baoliang, Liu Guoqing, Zhang Haijun
- Abstract summary: We propose Token-Importance Guided Direct Preference Optimization (TI-DPO) to ensure that large language models generate outputs aligned with human preferences. Experimental results show that TI-DPO achieves higher accuracy and stronger generative diversity, providing more stable and computationally efficient solutions.
- Score: 2.230951739798399
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensuring that large language models (LLMs) generate outputs aligned with human preferences is important for safe and effective AI interactions. While Direct Preference Optimization (DPO) employs an implicit reward function to optimize the policy model, it and its related variants overlook the differential importance of individual tokens and are sensitive to judgment noise in preference datasets during generation. Although recent methods attempt to estimate token-importance weights via probability prediction or simplistic weighting schemes, these estimates are prone to bias and still cannot fully address these issues. To solve this problem, we propose Token-Importance Guided Direct Preference Optimization (TI-DPO), which introduces two key innovations: gradient-based token-importance weights that dynamically prioritize critical tokens, and a triple loss that explicitly guides model outputs toward human-preferred responses and away from non-preferred responses. Experimental results show that TI-DPO achieves higher accuracy and stronger generative diversity, providing a more stable and computationally efficient solution than DPO and other RLHF methods.
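The abstract names two ingredients: gradient-based token-importance weights and a triple loss that pulls the policy toward preferred responses and away from non-preferred ones. Below is a minimal PyTorch sketch of how such a weighted objective could be assembled; the surprisal-based proxy for the gradient-based weights, the hinge-style triplet term, and the function names and hyperparameters (`beta`, `margin`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def token_importance_weights(token_logps: torch.Tensor) -> torch.Tensor:
    """Toy stand-in for gradient-based importance: use token surprisal so that
    low-confidence tokens receive larger weight, normalized per sequence."""
    scores = (-token_logps).detach()
    return scores / scores.sum(dim=-1, keepdim=True)

def ti_dpo_style_loss(policy_chosen_logps, policy_rejected_logps,
                      ref_chosen_logps, ref_rejected_logps,
                      beta: float = 0.1, margin: float = 1.0) -> torch.Tensor:
    """Importance-weighted DPO term plus a triplet-style margin term (assumed form)."""
    w_c = token_importance_weights(policy_chosen_logps)
    w_r = token_importance_weights(policy_rejected_logps)
    # Sequence-level log-ratios built from importance-weighted token log-probs.
    chosen_ratio = (w_c * (policy_chosen_logps - ref_chosen_logps)).sum(-1)
    rejected_ratio = (w_r * (policy_rejected_logps - ref_rejected_logps)).sum(-1)
    dpo_term = -F.logsigmoid(beta * (chosen_ratio - rejected_ratio))
    # Triplet-style term: prefer the chosen response over the rejected one by at least `margin`.
    triplet_term = F.relu(margin - chosen_ratio + rejected_ratio)
    return (dpo_term + triplet_term).mean()

# Usage with dummy per-token log-probabilities (batch=2, sequence length=5).
dummy = lambda: torch.log(torch.rand(2, 5) * 0.5 + 1e-3)
print(ti_dpo_style_loss(dummy(), dummy(), dummy(), dummy()).item())
```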
Related papers
- ConfPO: Exploiting Policy Model Confidence for Critical Token Selection in Preference Optimization [48.50761200321113]
We introduce ConfPO, a method for preference learning in Large Language Models (LLMs). It identifies and optimizes preference-critical tokens based solely on the training policy's confidence, without requiring any auxiliary models or additional compute (see the sketch after this list). Experimental results on challenging alignment benchmarks, including AlpacaEval 2 and Arena-Hard, demonstrate that ConfPO consistently outperforms uniform DAAs.
arXiv Detail & Related papers (2025-06-10T11:54:22Z)
- Mitigating Reward Over-optimization in Direct Alignment Algorithms with Importance Sampling [13.917799959981185]
Direct Alignment Algorithms (DAAs) have emerged as alternatives to the standard Reinforcement Learning from Human Feedback (RLHF). These methods are more susceptible to over-optimization, in which the model drifts away from the reference policy, leading to degraded performance as training progresses. This paper proposes a novel importance-sampling approach to mitigate the over-optimization problem of offline DAAs.
arXiv Detail & Related papers (2025-06-10T10:45:26Z)
- Optimal Transport-Based Token Weighting scheme for Enhanced Preference Optimization [17.801062522027266]
Direct Preference Optimization (DPO) has emerged as a promising framework for aligning Large Language Models with human preferences. Existing methods assign equal importance to all tokens in the response, while humans focus on more meaningful parts. We propose an Optimal Transport-based token weighting scheme for enhancing direct Preference Optimization (OTPO).
arXiv Detail & Related papers (2025-05-24T14:44:15Z)
- Sharpe Ratio-Guided Active Learning for Preference Optimization in RLHF [67.48004037550064]
We propose an active learning approach to efficiently select prompt and preference pairs. Our method evaluates the gradients of all potential preference annotations to assess their impact on model updates. Experimental results demonstrate that our method outperforms the baseline by up to 5% in win rates against the chosen completion.
arXiv Detail & Related papers (2025-03-28T04:22:53Z)
- Uncertainty-Penalized Direct Preference Optimization [52.387088396044206]
We develop a pessimistic framework for DPO by introducing preference uncertainty penalization schemes.
The penalization serves as a correction to the loss, attenuating the loss gradient for uncertain samples.
We show improved overall performance compared to vanilla DPO, as well as better completions on prompts from high-uncertainty chosen/rejected responses.
arXiv Detail & Related papers (2024-10-26T14:24:37Z)
- TIS-DPO: Token-level Importance Sampling for Direct Preference Optimization With Estimated Weights [73.9088920210495]
We propose a token-level importance sampling DPO objective named TIS-DPO that assigns importance weights to each token based on its reward. TIS-DPO significantly outperforms various baseline methods on harmlessness and helpfulness alignment and summarization tasks.
arXiv Detail & Related papers (2024-10-06T04:03:00Z)
- Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization [75.1240295759264]
We propose an effective framework for Bridging and Modeling Correlations in pairwise data, named BMC. We increase the consistency and informativeness of the pairwise preference signals through targeted modifications. We identify that DPO alone is insufficient to model these correlations and capture nuanced variations.
arXiv Detail & Related papers (2024-08-14T11:29:47Z)
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z)
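As noted in the ConfPO entry above, a minimal sketch of confidence-based critical-token selection is given below. The quantile threshold, the masking rule, and the function name are assumptions for illustration only, not the selection criterion from that paper.

```python
import torch

def critical_token_mask(token_logps: torch.Tensor, quantile: float = 0.5) -> torch.Tensor:
    """Mark the lowest-confidence tokens in each sequence (lowest log-probability
    under the training policy) as preference-critical."""
    cutoff = token_logps.quantile(quantile, dim=-1, keepdim=True)
    return (token_logps <= cutoff).float()

# Example: restrict a sequence-level score to the least confident half of the tokens.
token_logps = torch.log(torch.rand(2, 6))        # dummy per-token log-probs (batch=2, seq=6)
mask = critical_token_mask(token_logps)
masked_score = (mask * token_logps).sum(dim=-1)  # only the selected tokens contribute
print(mask, masked_score)
```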