Systematic Characterization of the Effectiveness of Alignment in Large Language Models for Categorical Decisions
- URL: http://arxiv.org/abs/2409.18995v1
- Date: Wed, 18 Sep 2024 19:03:04 GMT
- Title: Systematic Characterization of the Effectiveness of Alignment in Large Language Models for Categorical Decisions
- Authors: Isaac Kohane
- Abstract summary: This paper applies a systematic methodology for evaluating preference alignment in large language models (LLMs) on categorical decision-making, using medical triage as a domain-specific use case.
It also measures how effectively an alignment procedure will change the alignment of a specific model.
The results reveal significant variability in alignment effectiveness across models and alignment approaches.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: As large language models (LLMs) are deployed in high-stakes domains like healthcare, understanding how well their decision-making aligns with human preferences and values becomes crucial, especially when we recognize that there is no single gold standard for these preferences. This paper applies a systematic methodology for evaluating preference alignment in LLMs on categorical decision-making, with medical triage as a domain-specific use case. It also measures how effectively an alignment procedure will change the alignment of a specific model. Key to this methodology is a novel, simple measure, the Alignment Compliance Index (ACI), which quantifies how effectively an LLM can be aligned to a given preference function or gold standard. Since the ACI measures the effect rather than the process of alignment, it is applicable to alignment methods beyond the in-context learning used in this study. Using a dataset of simulated patient pairs, three frontier LLMs (GPT4o, Claude 3.5 Sonnet, and Gemini Advanced) were assessed on their ability to make triage decisions consistent with an expert clinician's preferences. The models' performance before and after alignment attempts was evaluated using various prompting strategies. The results reveal significant variability in alignment effectiveness across models and alignment approaches. Notably, models that performed well pre-alignment, as measured by ACI, sometimes degraded post-alignment, and small changes in the target preference function led to large shifts in model rankings. The ethical principles implicit in the LLMs' decisions, as understood by humans, were also explored through targeted questioning. This study motivates the near-term use of a practical set of methods, together with the ACI, to understand the correspondence between human and LLM decision-making values in categorical decisions such as triage.
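The abstract describes the ACI only at a high level, so the Python sketch below is an illustrative assumption rather than the paper's definition: it scores the effect of an alignment attempt as the share of pre-alignment disagreement with the target preference function that the attempt eliminates. The function names and the normalization are hypothetical.

```python
# Hypothetical sketch of an effect-based alignment index. The paper
# defines the Alignment Compliance Index (ACI) precisely; this toy
# version only illustrates scoring the *effect* of an alignment
# attempt against a target preference function.

def agreement(decisions, gold):
    """Fraction of pairwise triage decisions matching the target
    preference function (e.g., an expert clinician's choices)."""
    return sum(d == g for d, g in zip(decisions, gold)) / len(gold)

def aci_sketch(pre_decisions, post_decisions, gold):
    """Share of the pre-alignment headroom closed by alignment.
    1.0 means full compliance after alignment; negative values mean
    post-alignment degradation, which the study reports for some
    models. This normalization is an assumption, not the paper's."""
    a_pre = agreement(pre_decisions, gold)
    a_post = agreement(post_decisions, gold)
    if a_pre == 1.0:              # already fully compliant
        return 0.0
    return (a_post - a_pre) / (1.0 - a_pre)

# Toy example: agreement rises from 2/5 to 3/5, closing one third of
# the remaining headroom.
pre  = ["A", "B", "A", "B", "A"]
post = ["A", "B", "B", "B", "A"]
gold = ["A", "B", "B", "A", "B"]
print(round(aci_sketch(pre, post, gold), 2))  # 0.33
```

Because the index is normalized by the remaining headroom, a model that was already highly compliant pre-alignment is not penalized for having little room to improve.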
Related papers
- Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments
Real-world machine learning systems often encounter model performance degradation due to distributional shifts in the underlying data generating process.
Existing approaches to addressing shifts, such as concept drift adaptation, are limited by their reason-agnostic nature.
We propose self-healing machine learning (SHML) to overcome these limitations.
arXiv Detail & Related papers (2024-10-31T20:05:51Z)
- Diverging Preferences: When do Annotators Disagree and do Models Know?
We develop a taxonomy of disagreement sources spanning 10 categories across four high-level classes.
We find that the majority of disagreements stand in opposition to standard reward modeling approaches.
We develop methods for identifying diverging preferences to mitigate their influence on evaluation and training.
arXiv Detail & Related papers (2024-10-18T17:32:22Z)
- Aligning (Medical) LLMs for (Counterfactual) Fairness
Large Language Models (LLMs) have emerged as promising solutions for medical and clinical decision support applications.
LLMs are subject to different types of biases, which can lead to unfair treatment of individuals, worsen health disparities, and erode trust in AI-augmented medical tools.
We present a new model alignment approach for aligning LLMs using a preference optimization method within a knowledge distillation framework.
arXiv Detail & Related papers (2024-08-22T01:11:27Z)
- Evaluating Human Alignment and Model Faithfulness of LLM Rationale
We study how well large language models (LLMs) explain their generations through rationales.
We show that prompting-based methods are less "faithful" than attribution-based explanations.
arXiv Detail & Related papers (2024-06-28T20:06:30Z)
- Fairer Preferences Elicit Improved Human-Aligned Large Language Model Judgments
We show that large language models (LLMs) exhibit preference biases and worrying sensitivity to prompt designs.
Motivated by this phenomenon, we propose an automatic Zero-shot Evaluation-oriented Prompt Optimization framework, ZEPO.
arXiv Detail & Related papers (2024-06-17T09:48:53Z)
- Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data
Learning from preference labels plays a crucial role in fine-tuning large language models.
There are several distinct approaches for preference fine-tuning, including supervised learning, on-policy reinforcement learning (RL), and contrastive learning.
arXiv Detail & Related papers (2024-04-22T17:20:18Z)
- PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models
This work investigates the impact of fine-tuning and data selection on economic and political biases in Large Language Models (LLMs).
We introduce PoliTune, a fine-tuning methodology to explore the systematic aspects of aligning LLMs with specific ideologies.
We introduce a systematic method for using the open-source Llama3-70B for dataset selection, annotation, and synthesizing a preferences dataset for Direct Preference Optimization (DPO) to align the model with a given political ideology; a minimal sketch of the DPO objective follows this entry.
arXiv Detail & Related papers (2024-04-10T16:30:09Z)
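PoliTune's alignment step builds on Direct Preference Optimization, a published objective. Below is a minimal per-pair sketch of that standard loss; the variable names are ours, and a practical implementation would compute batched sequence log-probabilities from the policy and a frozen reference model.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) preference pair:
    -log sigmoid(beta * [(logp_w - ref_logp_w) - (logp_l - ref_logp_l)]),
    where inputs are total sequence log-probabilities."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(m)) == log(1 + exp(-m))
    return math.log1p(math.exp(-margin))

# Toy numbers: the policy already prefers the chosen response
# relative to the reference, so the loss is modest.
print(round(dpo_loss(-12.0, -15.0, -13.0, -14.5), 2))  # ~0.62
```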
- Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment
Alignment in artificial intelligence pursues consistency between model responses and human preferences and values.
Existing alignment techniques are mostly unidirectional, leading to suboptimal trade-offs and poor flexibility over various objectives.
We introduce controllable preference optimization (CPO), which explicitly specifies preference scores for different objectives; a toy illustration follows this entry.
arXiv Detail & Related papers (2024-02-29T12:12:30Z)
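CPO's core idea is to condition generation on explicit per-objective preference scores rather than a single scalar reward. The toy prompt constructor below illustrates that idea only; the control-token format is a hypothetical stand-in, not the paper's exact scheme.

```python
def controllable_prompt(instruction, scores):
    """Prefix an instruction with explicit per-objective preference
    scores (hypothetical token format), letting the caller request a
    particular trade-off instead of a fixed one."""
    control = " ".join(f"<{obj}:{level}>" for obj, level in scores.items())
    return f"{control} {instruction}"

print(controllable_prompt(
    "Summarize the patient's triage note.",
    {"helpfulness": 5, "harmlessness": 4, "brevity": 3},
))
# <helpfulness:5> <harmlessness:4> <brevity:3> Summarize the patient's triage note.
```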
- Improving the compromise between accuracy, interpretability and personalization of rule-based machine learning in medical problems
We introduce a new component that predicts whether a given rule will be correct for a particular patient, which brings personalization into the procedure.
Validation on three public clinical datasets shows that this also increases the predictive performance of the selected set of rules; a schematic sketch of the gating idea follows this entry.
arXiv Detail & Related papers (2021-06-15T01:19:04Z)
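The personalization component above can be read as a gate: a per-rule classifier predicts, for each patient, whether that rule will be correct, and only trusted rules vote. The sketch below assumes callables for rules and correctness predictors; the threshold and the majority-vote scheme are illustrative assumptions, not the paper's exact procedure.

```python
def personalized_predict(rules, trust_models, patient, threshold=0.5):
    """Apply only the rules whose paired correctness model trusts
    them for this patient, then majority-vote the surviving outputs.
    `rules[i](patient)` returns a label; `trust_models[i](patient)`
    returns the predicted probability that rule i is correct here."""
    votes = [rule(patient)
             for rule, trust in zip(rules, trust_models)
             if trust(patient) >= threshold]
    if not votes:               # no rule trusted: abstain / defer
        return None
    return max(set(votes), key=votes.count)

# Toy usage with two hand-written rules and constant trust models.
rules = [lambda p: "high-risk" if p["age"] > 65 else "low-risk",
         lambda p: "high-risk" if p["bp"] > 180 else "low-risk"]
trust = [lambda p: 0.9, lambda p: 0.3]   # second rule distrusted here
print(personalized_predict(rules, trust, {"age": 70, "bp": 120}))  # high-risk
```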
- Leveraging Expert Consistency to Improve Algorithmic Decision Support
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)