Epistemic Alignment: A Mediating Framework for User-LLM Knowledge Delivery
- URL: http://arxiv.org/abs/2504.01205v1
- Date: Tue, 01 Apr 2025 21:38:12 GMT
- Title: Epistemic Alignment: A Mediating Framework for User-LLM Knowledge Delivery
- Authors: Nicholas Clark, Hua Shen, Bill Howe, Tanushree Mitra
- Abstract summary: We propose a set of ten challenges in the transmission of knowledge derived from the philosophical literature. We find users develop workarounds to address each of the challenges. For AI developers, the Epistemic Alignment Framework offers concrete guidance for supporting diverse approaches to knowledge.
- Score: 17.23286832909591
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: LLMs increasingly serve as tools for knowledge acquisition, yet users cannot effectively specify how they want information presented. When users request that LLMs "cite reputable sources," "express appropriate uncertainty," or "include multiple perspectives," they discover that current interfaces provide no structured way to articulate these preferences. The result is prompt sharing folklore: community-specific copied prompts passed through trust relationships rather than based on measured efficacy. We propose the Epistemic Alignment Framework, a set of ten challenges in knowledge transmission derived from the philosophical literature of epistemology, concerning issues such as evidence quality assessment and calibration of testimonial reliance. The framework serves as a structured intermediary between user needs and system capabilities, creating a common vocabulary to bridge the gap between what users want and what systems deliver. Through a thematic analysis of custom prompts and personalization strategies shared on online communities where these issues are actively discussed, we find users develop elaborate workarounds to address each of the challenges. We then apply our framework to two prominent model providers, OpenAI and Anthropic, through content analysis of their documented policies and product features. Our analysis shows that while these providers have partially addressed the challenges we identified, they fail to establish adequate mechanisms for specifying epistemic preferences, lack transparency about how preferences are implemented, and offer no verification tools to confirm whether preferences were followed. For AI developers, the Epistemic Alignment Framework offers concrete guidance for supporting diverse approaches to knowledge; for users, it works toward information delivery that aligns with their specific needs rather than defaulting to one-size-fits-all approaches.
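The paper does not prescribe a concrete interface, but the idea of a structured intermediary between user needs and system capabilities can be sketched. The snippet below is a hypothetical preference schema (all field names and values are illustrative, not taken from the paper) showing how epistemic preferences such as citation policy, uncertainty expression, and perspective coverage might be specified explicitly rather than left to free-text prompts.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema illustrating what a structured epistemic-preference
# specification could look like; field names and values are illustrative,
# not taken from the paper.
@dataclass
class EpistemicPreferences:
    citation_policy: str = "cite_reputable_sources"      # how sources should be attributed
    uncertainty_expression: str = "calibrated_verbal"     # how confidence should be conveyed
    perspectives: List[str] = field(default_factory=lambda: ["mainstream", "dissenting"])
    min_evidence_quality: str = "peer_reviewed"            # floor on acceptable evidence
    testimonial_reliance: str = "flag_single_source"       # how to treat claims resting on one source

    def to_system_prompt(self) -> str:
        """Render the preferences as explicit instructions for an LLM."""
        return (
            f"Cite sources per policy '{self.citation_policy}'. "
            f"Express uncertainty as '{self.uncertainty_expression}'. "
            f"Cover perspectives: {', '.join(self.perspectives)}. "
            f"Prefer evidence at or above '{self.min_evidence_quality}'. "
            f"Handle single-source claims per '{self.testimonial_reliance}'."
        )

if __name__ == "__main__":
    print(EpistemicPreferences().to_system_prompt())
```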
Related papers
- CLEAR-KGQA: Clarification-Enhanced Ambiguity Resolution for Knowledge Graph Question Answering [13.624962763072899]
KGQA systems typically assume user queries are unambiguous, an assumption that rarely holds in real-world applications.
We propose a novel framework that dynamically handles both entity ambiguity (e.g., distinguishing between entities with similar names) and intent ambiguity (e.g., clarifying different interpretations of user queries) through interactive clarification.
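The summary gives no implementation details, but the interactive-clarification idea can be roughly sketched as follows; the entity catalogue, question wording, and `ask_user` callback are hypothetical placeholders, not taken from CLEAR-KGQA.

```python
from typing import Callable, Dict, List

# Rough sketch of interactive entity disambiguation; the catalogue and
# question format are invented for illustration.
def resolve_entity(
    mention: str,
    catalogue: Dict[str, List[str]],    # surface form -> candidate KG entity IDs
    ask_user: Callable[[str], str],      # callback that poses a clarifying question
) -> str:
    candidates = catalogue.get(mention, [])
    if not candidates:
        raise KeyError(f"No KG entity found for mention '{mention}'")
    if len(candidates) == 1:
        return candidates[0]             # unambiguous: no clarification needed
    # Entity ambiguity: ask the user to choose among similarly named entities.
    answer = ask_user(f"Did you mean one of these? {', '.join(candidates)}")
    return next(c for c in candidates if answer.lower() in c.lower())

if __name__ == "__main__":
    catalogue = {"mercury": ["Mercury_(planet)", "Mercury_(element)"]}
    print(resolve_entity("mercury", catalogue, ask_user=lambda q: "planet"))
    # Mercury_(planet)
```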
arXiv Detail & Related papers (2025-04-13T17:34:35Z)
- From Bugs to Benefits: Improving User Stories by Leveraging Crowd Knowledge with CrUISE-AC [0.0]
We present CrUISE-AC as a fully automated method that investigates issues and generates non-trivial additional acceptance criteria for a given user story. Our evaluation shows that 80-82% of the generated acceptance criteria add relevant requirements to the user stories.
arXiv Detail & Related papers (2025-01-25T11:44:24Z)
- Unveiling User Preferences: A Knowledge Graph and LLM-Driven Approach for Conversational Recommendation [55.5687800992432]
We propose a plug-and-play framework that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to unveil user preferences. This enables the LLM to transform KG entities into concise natural language descriptions, allowing it to comprehend domain-specific knowledge.
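A minimal sketch of the entity-to-text idea described above, turning KG triples into short natural-language descriptions an LLM could consume; the triple format and template are assumptions, not the paper's actual prompting scheme.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

# Hypothetical verbaliser: converts KG triples about an entity into a short
# description that could be placed into an LLM prompt.
def describe_entity(entity: str, triples: List[Triple]) -> str:
    facts = [f"{s} {r.replace('_', ' ')} {o}" for s, r, o in triples if s == entity]
    return f"{entity}: " + "; ".join(facts) + "."

if __name__ == "__main__":
    kg = [
        ("Inception", "directed_by", "Christopher Nolan"),
        ("Inception", "has_genre", "science fiction"),
    ]
    print(describe_entity("Inception", kg))
```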
arXiv Detail & Related papers (2024-11-16T11:47:21Z)
- Do You Know What You Are Talking About? Characterizing Query-Knowledge Relevance For Reliable Retrieval Augmented Generation [19.543102037001134]
Language models (LMs) are known to suffer from hallucinations and misinformation.
Retrieval augmented generation (RAG) that retrieves verifiable information from an external knowledge corpus provides a tangible solution to these problems.
RAG generation quality is highly dependent on the relevance between a user's query and the retrieved documents.
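The point about relevance can be illustrated with a small sketch that scores retrieved documents against the query and keeps only those above a threshold before generation; the bag-of-words cosine similarity and the 0.2 cutoff are illustrative stand-ins for whatever relevance measure a real system would use.

```python
from collections import Counter
from math import sqrt
from typing import List

# Toy relevance score: cosine similarity over bag-of-words counts.
# A real RAG system would use dense embeddings or a learned reranker.
def relevance(query: str, document: str) -> float:
    q, d = Counter(query.lower().split()), Counter(document.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = sqrt(sum(v * v for v in q.values())) * sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def filter_context(query: str, documents: List[str], threshold: float = 0.2) -> List[str]:
    """Keep only documents relevant enough to condition generation on."""
    return [doc for doc in documents if relevance(query, doc) >= threshold]

if __name__ == "__main__":
    docs = ["epistemic alignment of language models", "recipe for sourdough bread"]
    print(filter_context("alignment of language models", docs))
```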
arXiv Detail & Related papers (2024-10-10T19:14:55Z)
- Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding [118.75567341513897]
Existing methods typically analyze target text in isolation or solely with non-member contexts. We propose Con-ReCall, a novel approach that leverages the asymmetric distributional shifts induced by member and non-member contexts.
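A very rough sketch of the contrastive idea: score the target text conditioned on a member-style prefix versus a non-member prefix and compare the two. The `log_likelihood` helper is a placeholder for a real model call, and the scoring details here are assumptions rather than Con-ReCall's actual formulation.

```python
from typing import Callable

# Placeholder signature: a real implementation would compute the sum of
# token log-probabilities of `text` given `prefix` under the target LLM.
LogLik = Callable[[str, str], float]  # (prefix, text) -> log p(text | prefix)

def contrastive_score(text: str, member_prefix: str, nonmember_prefix: str,
                      log_likelihood: LogLik) -> float:
    """Hypothetical membership signal: how much a member-style context shifts
    the conditional likelihood of `text` relative to a non-member context."""
    return log_likelihood(member_prefix, text) - log_likelihood(nonmember_prefix, text)

if __name__ == "__main__":
    # Stub model: pretends a member-style context boosts texts containing "training".
    fake_ll = lambda prefix, text: -10.0 + (2.0 if "member" in prefix and "training" in text else 0.0)
    print(contrastive_score("a training sentence", "member context", "fresh context", fake_ll))
```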
arXiv Detail & Related papers (2024-09-05T09:10:38Z)
- Trust-Oriented Adaptive Guardrails for Large Language Models [9.719986610417441]
Guardrails are designed to ensure that large language models (LLMs) align with human values by moderating harmful or toxic responses. This paper addresses a critical issue: existing guardrails lack a well-founded methodology to accommodate the diverse needs of different user groups. We introduce an adaptive guardrail mechanism that dynamically moderates access to sensitive content based on user trust metrics.
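As a loose illustration of trust-adaptive moderation (the trust tiers, sensitivity levels, and threshold rule are invented here; the paper's actual mechanism is not specified in the summary above):

```python
from dataclasses import dataclass
from typing import Dict

# Hypothetical policy: higher-trust users may access more sensitive content.
# Trust tiers and sensitivity levels are illustrative, not from the paper.
@dataclass
class GuardrailPolicy:
    max_sensitivity_by_trust: Dict[str, int]  # trust tier -> highest allowed sensitivity (0-3)

    def allow(self, user_trust_tier: str, content_sensitivity: int) -> bool:
        limit = self.max_sensitivity_by_trust.get(user_trust_tier, 0)
        return content_sensitivity <= limit

if __name__ == "__main__":
    policy = GuardrailPolicy({"new_user": 1, "verified_professional": 3})
    print(policy.allow("new_user", 2))               # False: too sensitive for this tier
    print(policy.allow("verified_professional", 2))  # True
```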
arXiv Detail & Related papers (2024-08-16T18:07:48Z)
- Establishing Knowledge Preference in Language Models [80.70632813935644]
Language models are known to encode a great amount of factual knowledge through pretraining.
Such knowledge might be insufficient to cater to user requests.
When answering questions about ongoing events, the model should use recent news articles to update its response.
When some facts are edited in the model, the updated facts should override all prior knowledge learned by the model.
arXiv Detail & Related papers (2024-07-17T23:16:11Z)
- Beyond One-Size-Fits-All: Adapting Counterfactual Explanations to User Objectives [2.3369294168789203]
Counterfactual Explanations (CFEs) offer insights into the decision-making processes of machine learning algorithms.
Existing literature often overlooks the diverse needs and objectives of users across different applications and domains.
We advocate for a nuanced understanding of CFEs, recognizing the variability in desired properties based on user objectives and target applications.
arXiv Detail & Related papers (2024-04-12T13:11:55Z)
- RELIC: Investigating Large Language Model Responses using Self-Consistency [58.63436505595177]
Large Language Models (LLMs) are notorious for blending fact with fiction and generating non-factual content, known as hallucinations.
We propose an interactive system that helps users gain insight into the reliability of the generated text.
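The self-consistency idea can be sketched as sampling several responses to the same prompt and measuring how often they agree; the agreement measure and the `generate` callback below are placeholders, not RELIC's actual interface.

```python
from collections import Counter
from typing import Callable, List

def self_consistency(prompt: str, generate: Callable[[str], str], n: int = 5) -> float:
    """Fraction of sampled responses matching the most common answer.
    Low agreement is a rough signal that the claim may be unreliable."""
    samples: List[str] = [generate(prompt).strip().lower() for _ in range(n)]
    most_common_count = Counter(samples).most_common(1)[0][1]
    return most_common_count / n

if __name__ == "__main__":
    import random
    fake_llm = lambda p: random.choice(["paris", "paris", "paris", "lyon"])
    print(self_consistency("Capital of France?", fake_llm, n=10))
```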
arXiv Detail & Related papers (2023-11-28T14:55:52Z)
- Merging Generated and Retrieved Knowledge for Open-Domain QA [72.42262579925911]
COMBO is a Compatibility-Oriented knowledge Merging framework for Better Open-domain QA.
We show that COMBO outperforms competitive baselines on three out of four tested open-domain QA benchmarks.
arXiv Detail & Related papers (2023-10-22T19:37:06Z)
- A Question Answering Framework for Decontextualizing User-facing Snippets from Scientific Documents [47.39561727838956]
We use language models to rewrite snippets from scientific documents so that they can be read on their own.
We propose a framework that decomposes the task into three stages: question generation, question answering, and rewriting.
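The three-stage decomposition can be sketched as a simple pipeline; the stage functions below are stubs standing in for LLM calls, since the summary does not specify prompts or models, and the example strings are invented.

```python
from typing import Callable, List

# Each stage would be an LLM call in the actual framework; here they are
# typed placeholders so the pipeline structure is explicit.
GenerateQuestions = Callable[[str, str], List[str]]   # (snippet, document) -> questions
AnswerQuestion = Callable[[str, str], str]            # (question, document) -> answer
Rewrite = Callable[[str, List[str]], str]             # (snippet, answers) -> standalone snippet

def decontextualize(snippet: str, document: str,
                    gen_q: GenerateQuestions, answer: AnswerQuestion,
                    rewrite: Rewrite) -> str:
    """Question generation -> question answering -> rewriting."""
    questions = gen_q(snippet, document)
    answers = [answer(q, document) for q in questions]
    return rewrite(snippet, answers)

if __name__ == "__main__":
    result = decontextualize(
        "It improves accuracy by 4%.",
        "The paper introduces FooNet. FooNet improves accuracy by 4% over BarNet.",
        gen_q=lambda s, d: ["What does 'it' refer to?"],
        answer=lambda q, d: "FooNet",
        rewrite=lambda s, a: s.replace("It", a[0]),
    )
    print(result)  # FooNet improves accuracy by 4%.
```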
arXiv Detail & Related papers (2023-05-24T06:23:02Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.