Societal Alignment Frameworks Can Improve LLM Alignment
- URL: http://arxiv.org/abs/2503.00069v1
- Date: Thu, 27 Feb 2025 13:26:07 GMT
- Title: Societal Alignment Frameworks Can Improve LLM Alignment
- Authors: Karolina Stańczak, Nicholas Meade, Mehar Bhatia, Hattie Zhou, Konstantin Böttinger, Jeremy Barnes, Jason Stanley, Jessica Montgomery, Richard Zemel, Nicolas Papernot, Nicolas Chapados, Denis Therien, Timothy P. Lillicrap, Ana Marasović, Sylvie Delacroix, Gillian K. Hadfield, Siva Reddy
- Abstract summary: We argue that improving LLM alignment requires incorporating insights from societal alignment frameworks. We then investigate how uncertainty within societal alignment frameworks manifests in LLM alignment. We end our discussion by offering an alternative view on LLM alignment, framing the underspecified nature of its objectives as an opportunity.
- Score: 50.97852062232431
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent progress in large language models (LLMs) has focused on producing responses that meet human expectations and align with shared values - a process termed alignment. However, aligning LLMs remains challenging due to the inherent disconnect between the complexity of human values and the narrow nature of the technological approaches designed to address them. Current alignment methods often lead to misspecified objectives, reflecting the broader issue of incomplete contracts: the impracticality of specifying, between a model developer and the model, a contract that accounts for every scenario in LLM alignment. In this paper, we argue that improving LLM alignment requires incorporating insights from societal alignment frameworks, including social, economic, and contractual alignment, and discuss potential solutions drawn from these domains. Given the role of uncertainty within societal alignment frameworks, we then investigate how it manifests in LLM alignment. We end our discussion by offering an alternative view on LLM alignment, framing the underspecified nature of its objectives as an opportunity rather than something to be resolved by perfecting their specification. Beyond technical improvements in LLM alignment, we discuss the need for participatory designs of alignment interfaces.
Related papers
- LLM Alignment as Retriever Optimization: An Information Retrieval Perspective [44.26715637344781]
Large Language Models (LLMs) have revolutionized artificial intelligence with capabilities in reasoning, coding, and communication.
Our work introduces a novel direct optimization approach for LLM alignment by drawing on established Information Retrieval (IR) principles.
Building on this foundation, we propose LLM Alignment as Retriever Preference Optimization (LarPO), a new alignment method that enhances overall alignment quality.
arXiv Detail & Related papers (2025-02-06T01:22:06Z) - Distributive Fairness in Large Language Models: Evaluating Alignment with Human Values [13.798198972161657]
A number of societal problems involve the distribution of resources, where fairness, along with economic efficiency, plays a critical role in the desirability of outcomes. This paper examines whether large language models (LLMs) adhere to fundamental fairness concepts and investigates their alignment with human preferences.
arXiv Detail & Related papers (2025-02-01T04:24:47Z) - Exploring Accuracy-Fairness Trade-off in Large Language Models [10.5817207739373]
We study the intricate challenge of harmonising accuracy and fairness in the enhancement of Large Language Models.
Overemphasising optimisation of one metric invariably leads to a significant degradation of the other.
Our investigation reveals that multi-objective evolutionary learning (MOEL) methodologies offer promising avenues for tackling this challenge.
arXiv Detail & Related papers (2024-11-21T04:40:35Z) - Dynamic Rewarding with Prompt Optimization Enables Tuning-free Self-Alignment of Language Models [54.381650481255235]
We introduce a new tuning-free approach for self-alignment, Dynamic Rewarding with Prompt Optimization (DRPO).
Our approach leverages a search-based optimization framework that allows LLMs to iteratively self-improve and craft the optimal alignment instructions.
Empirical evaluations on eight recent LLMs, both open and closed-sourced, demonstrate that DRPO significantly enhances alignment performance.
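The summary above describes a search loop that iteratively refines alignment instructions under a dynamically constructed reward. Below is a minimal, hypothetical sketch of such a loop; `generate`, `reward`, and `mutate` are illustrative stand-ins, not the paper's actual implementation.
```python
# Illustrative DRPO-style search over alignment prompts (assumptions only;
# the real method's reward design and prompt-edit operators follow the paper).
import random

def generate(system_prompt: str, query: str) -> str:
    """Hypothetical LLM call returning a response under the given prompt."""
    return f"[response to '{query}' under prompt '{system_prompt[:30]}...']"

def reward(query: str, response: str) -> float:
    """Hypothetical dynamic reward; in the paper this adapts per query."""
    return random.random()

def mutate(prompt: str) -> str:
    """Hypothetical edit operator that rewrites the alignment instructions."""
    edits = [" Be concise.", " Cite sources.", " Refuse unsafe requests."]
    return prompt + random.choice(edits)

def optimize_alignment_prompt(queries, init_prompt, steps=20):
    best_prompt, best_score = init_prompt, float("-inf")
    for _ in range(steps):
        candidate = mutate(best_prompt)
        score = sum(reward(q, generate(candidate, q)) for q in queries) / len(queries)
        if score > best_score:  # keep the instructions that earn higher reward
            best_prompt, best_score = candidate, score
    return best_prompt

if __name__ == "__main__":
    print(optimize_alignment_prompt(["How do I stay safe online?"],
                                    "You are a helpful assistant."))
```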
arXiv Detail & Related papers (2024-11-13T16:15:38Z) - Combining Domain and Alignment Vectors to Achieve Better Knowledge-Safety Trade-offs in LLMs [64.83462841029089]
We introduce an efficient merging-based alignment method called MergeAlign that interpolates the domain and alignment vectors, creating safer domain-specific models.
We apply MergeAlign on Llama3 variants that are experts in medicine and finance, obtaining substantial alignment improvements with minimal to no degradation on domain-specific benchmarks.
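The interpolation of domain and alignment vectors described above resembles task-vector merging. The sketch below illustrates that idea on toy weight dictionaries; the coefficients and the exact merging rule are assumptions, not the method as published.
```python
# Toy illustration of merging a domain vector and an alignment vector into a
# base model. Dictionaries stand in for model state_dicts.

def task_vector(finetuned: dict, base: dict) -> dict:
    """Difference between a fine-tuned checkpoint and the base model."""
    return {k: finetuned[k] - base[k] for k in base}

def merge(base: dict, domain: dict, aligned: dict, alpha=1.0, beta=1.0) -> dict:
    """Add interpolated domain and alignment vectors back onto the base weights."""
    d_vec = task_vector(domain, base)
    a_vec = task_vector(aligned, base)
    return {k: base[k] + alpha * d_vec[k] + beta * a_vec[k] for k in base}

if __name__ == "__main__":
    base    = {"w1": 0.10, "w2": -0.20}
    medical = {"w1": 0.35, "w2": -0.15}   # domain-expert checkpoint
    safe    = {"w1": 0.05, "w2": -0.40}   # alignment-tuned checkpoint
    print(merge(base, medical, safe, alpha=0.8, beta=0.8))
```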
arXiv Detail & Related papers (2024-11-11T09:32:20Z) - A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law [65.87885628115946]
Large language models (LLMs) are revolutionizing the landscapes of finance, healthcare, and law.
We highlight the instrumental role of LLMs in enhancing diagnostic and treatment methodologies in healthcare, innovating financial analytics, and refining legal interpretation and compliance strategies.
We critically examine the ethics for LLM applications in these fields, pointing out the existing ethical concerns and the need for transparent, fair, and robust AI systems.
arXiv Detail & Related papers (2024-05-02T22:43:02Z) - Aligners: Decoupling LLMs and Alignment [47.00002038331952]
Large Language Models (LLMs) need to be aligned with human expectations to ensure their safety and utility in most applications.
We propose to decouple LLMs and alignment by training aligner models that can be used to align any LLM for a given criterion on an as-needed basis.
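Decoupling alignment in this way amounts to a two-stage pipeline: a base LLM drafts a response, and a separate aligner model revises it against a chosen criterion. A minimal sketch follows, with hypothetical function names standing in for the two models.
```python
# Illustrative decoupled-alignment pipeline (function names are assumptions).

def base_llm(query: str) -> str:
    """Hypothetical unaligned base model producing a draft response."""
    return f"Draft answer to: {query}"

def aligner(query: str, draft: str, criterion: str) -> str:
    """Hypothetical aligner model that edits the draft to meet the criterion."""
    return f"{draft} [revised for criterion: {criterion}]"

def aligned_generate(query: str, criterion: str = "harmlessness") -> str:
    draft = base_llm(query)  # any off-the-shelf LLM can fill this role
    return aligner(query, draft, criterion)

if __name__ == "__main__":
    print(aligned_generate("Explain how vaccines work."))
```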
arXiv Detail & Related papers (2024-03-07T04:54:56Z) - Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment [103.12563033438715]
Alignment in artificial intelligence pursues consistency between model responses and human preferences as well as values.
Existing alignment techniques are mostly unidirectional, leading to suboptimal trade-offs and poor flexibility over various objectives.
We introduce controllable preference optimization (CPO), which explicitly specifies preference scores for different objectives.
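One way to make per-objective preference scores explicit at inference time is to condition generation on control tags. The sketch below is illustrative only; CPO's actual score format and training objective differ in detail.
```python
# Illustrative conditioning on explicit preference scores (assumed tag format).

def control_prompt(query: str, scores: dict) -> str:
    """Prefix the query with target preference scores for each objective."""
    tags = " ".join(f"<{obj}:{val}>" for obj, val in scores.items())
    return f"{tags} {query}"

def generate(prompt: str) -> str:
    """Hypothetical LLM call trained to respect the control tags."""
    return f"[response conditioned on '{prompt}']"

if __name__ == "__main__":
    # Request a maximally helpful and strongly safe answer, with little humor.
    prompt = control_prompt("How do I treat a burn?",
                            {"helpfulness": 5, "harmlessness": 5, "humor": 1})
    print(generate(prompt))
```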
arXiv Detail & Related papers (2024-02-29T12:12:30Z) - LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks [18.068035947969044]
There is considerable confusion about the role of Large Language Models (LLMs) in planning and reasoning tasks.
We argue that auto-regressive LLMs cannot, by themselves, do planning or self-verification.
We present a vision of LLM-Modulo Frameworks that combine the strengths of LLMs with external model-based verifiers.
arXiv Detail & Related papers (2024-02-02T14:43:18Z) - Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention [53.896974148579346]
Large Language Models (LLMs) have achieved unprecedented breakthroughs in various natural language processing domains.
The enigmatic "black-box" nature of LLMs remains a significant challenge for interpretability, hampering transparent and accountable applications.
We propose a novel methodology anchored in sparsity-guided techniques, aiming to provide a holistic interpretation of LLMs.
arXiv Detail & Related papers (2023-12-22T19:55:58Z) - The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning [61.68787689234622]
A recent study, LIMA, shows that alignment tuning with merely 1K examples can achieve strong alignment performance.
This raises questions about how exactly the alignment tuning transforms a base LLM.
We show that the gap between tuning-free and tuning-based alignment methods can be significantly reduced through strategic prompting.
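Strategic prompting of this kind typically wraps the base model in a curated system prompt plus a few stylistic in-context examples, with no weight updates. A minimal sketch, assuming a hypothetical `base_llm` completion function (the prompt template studied in the paper is more elaborate):
```python
# Illustrative tuning-free alignment via strategic prompting (template assumed).

SYSTEM = ("You are a helpful, honest, and harmless assistant. "
          "Answer clearly, acknowledge uncertainty, and refuse unsafe requests.")

EXAMPLES = [
    ("What is the capital of France?", "The capital of France is Paris."),
    ("How do I pick a lock?", "I can't help with that, but I can suggest contacting a locksmith."),
]

def build_prompt(query: str) -> str:
    shots = "\n\n".join(f"User: {q}\nAssistant: {a}" for q, a in EXAMPLES)
    return f"{SYSTEM}\n\n{shots}\n\nUser: {query}\nAssistant:"

def base_llm(prompt: str) -> str:
    """Hypothetical untuned base model completing the prompt."""
    return "[completion following the in-context style]"

if __name__ == "__main__":
    print(base_llm(build_prompt("Summarize the theory of relativity.")))
```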
arXiv Detail & Related papers (2023-12-04T00:46:11Z) - Large Language Model Alignment: A Survey [42.03229317132863]
The potential of large language models (LLMs) is undeniably vast; however, they may yield texts that are imprecise, misleading, or even detrimental.
This survey endeavors to furnish an extensive exploration of alignment methodologies designed for LLMs.
We also probe into salient issues including the models' interpretability and potential vulnerabilities to adversarial attacks.
arXiv Detail & Related papers (2023-09-26T15:49:23Z)