The MEVIR Framework: A Virtue-Informed Moral-Epistemic Model of Human Trust Decisions
- URL: http://arxiv.org/abs/2512.02310v1
- Date: Tue, 02 Dec 2025 01:11:35 GMT
- Title: The MEVIR Framework: A Virtue-Informed Moral-Epistemic Model of Human Trust Decisions
- Authors: Daniel Schwabe
- Abstract summary: This report introduces the Moral-Epistemic VIRtue informed (MEVIR) framework. Central to the framework are the ontological concepts of Truth Bearers, Truth Makers, and Ontological Unpacking. The report analyzes how propaganda, psychological operations, and echo chambers exploit the MEVIR process.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The 21st-century information landscape presents an unprecedented challenge: how do individuals make sound trust decisions amid complexity, polarization, and misinformation? Traditional rational-agent models fail to capture human trust formation, which involves a complex synthesis of reason, character, and pre-rational intuition. This report introduces the Moral-Epistemic VIRtue informed (MEVIR) framework, a comprehensive descriptive model integrating three theoretical perspectives: (1) a procedural model describing evidence-gathering and reasoning chains; (2) Linda Zagzebski's virtue epistemology, characterizing intellectual disposition and character-driven processes; and (3) Extended Moral Foundations Theory (EMFT), explaining rapid, automatic moral intuitions that anchor reasoning. Central to the framework are ontological concepts - Truth Bearers, Truth Makers, and Ontological Unpacking - revealing that disagreements often stem from fundamental differences in what counts as admissible reality. MEVIR reframes cognitive biases as systematic failures in applying epistemic virtues and demonstrates how different moral foundations lead agents to construct separate, internally coherent "trust lattices". Through case studies on vaccination mandates and climate policy, the framework shows that political polarization reflects a deeper divergence in moral priors, epistemic authorities, and evaluative heuristics. The report analyzes how propaganda, psychological operations, and echo chambers exploit the MEVIR process. The framework provides a foundation for a Decision Support System to augment metacognition, helping individuals identify biases and practice epistemic virtues. The report concludes by acknowledging limitations and proposing longitudinal studies for future research.
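To make the framework's components concrete, the sketch below shows one hypothetical way the MEVIR ingredients named in the abstract (moral priors, epistemic virtue dispositions, Truth Bearers/Truth Makers, trusted authorities, and a trust-lattice step) could be represented in a Decision Support System. The paper is a descriptive model and publishes no code; every name here (MoralFoundation, EpistemicVirtue, Claim, Agent, trust_score) and the toy scoring formula are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch only: the MEVIR paper does not specify an implementation.
# All class names and the scoring heuristic below are assumptions for illustration.
from dataclasses import dataclass, field
from enum import Enum


class MoralFoundation(Enum):
    CARE = "care"
    FAIRNESS = "fairness"
    LOYALTY = "loyalty"
    AUTHORITY = "authority"
    SANCTITY = "sanctity"
    LIBERTY = "liberty"


class EpistemicVirtue(Enum):
    OPEN_MINDEDNESS = "open-mindedness"
    INTELLECTUAL_HUMILITY = "intellectual humility"
    THOROUGHNESS = "thoroughness"


@dataclass
class Claim:
    """A Truth Bearer: the proposition whose trustworthiness is being assessed."""
    text: str
    truth_makers: list[str] = field(default_factory=list)  # evidence held to ground the claim


@dataclass
class Agent:
    """An agent's moral-epistemic profile: moral priors anchor reasoning, virtues shape it."""
    moral_priors: dict[MoralFoundation, float]          # intuition weights in [0, 1]
    virtue_dispositions: dict[EpistemicVirtue, float]   # strength of each disposition in [0, 1]
    trusted_authorities: set[str]                       # accepted epistemic authorities


def trust_score(agent: Agent, claim: Claim, source: str) -> float:
    """Toy scoring of one node in a 'trust lattice': moral intuition anchors the
    judgment, evidence and source authority adjust it, and virtue dispositions
    moderate how much the evidence is allowed to count."""
    anchor = sum(agent.moral_priors.values()) / max(len(agent.moral_priors), 1)
    evidence_weight = agent.virtue_dispositions.get(EpistemicVirtue.THOROUGHNESS, 0.5)
    evidence = evidence_weight * min(len(claim.truth_makers), 3) / 3
    authority_bonus = 0.2 if source in agent.trusted_authorities else 0.0
    return min(1.0, 0.4 * anchor + 0.4 * evidence + authority_bonus)
```

Under this reading, two agents with different moral priors and different trusted-authority sets assign different scores to the same claim, which is one way to picture the abstract's point that polarization can arise from divergent moral priors and epistemic authorities rather than from the evidence itself.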
Related papers
- The Epistemological Consequences of Large Language Models: Rethinking collective intelligence and institutional knowledge [0.0]
We develop a theory of rationality as distributed across human collectives, using dual process theory as background. We distinguish internalist justification, defined as reflective understanding of why a proposition is true, from externalist justification, defined as reliable transmission of truths. We argue that LLMs approximate externalist reliabilism because they can reliably transmit information whose justificatory basis is established elsewhere, but they do not themselves possess reflective justification.
arXiv Detail & Related papers (2025-12-22T16:52:37Z) - The MEVIR 2 Framework: A Virtue-Informed Moral-Epistemic Model of Human Trust Decisions [0.0]
MEVIR 2 recognizes that human trust emerges from three interacting foundations. MEVIR 2's key innovation introduces "Truth Tribes" (TTs): stable communities sharing aligned procedural, virtue, and moral epistemic profiles.
arXiv Detail & Related papers (2025-12-20T23:32:54Z) - Exploring Syntropic Frameworks in AI Alignment: A Philosophical Investigation [0.0]
I argue that AI alignment should be reconceived as architecting syntropic, reasons-responsive agents through process-based, multi-agent, developmental mechanisms. I articulate the "specification trap" argument, demonstrating why content-based value specification appears structurally unstable. I propose syntropy as an information-theoretic framework for understanding multi-agent alignment dynamics.
arXiv Detail & Related papers (2025-11-19T23:31:29Z) - LLMs as Strategic Agents: Beliefs, Best Response Behavior, and Emergent Heuristics [0.0]
Large Language Models (LLMs) are increasingly applied to domains that require reasoning about other agents' behavior. We show that current frontier models exhibit belief-coherent best-response behavior at targeted reasoning memorization. Under increasing complexity, explicit recursion gives way to internally generated rules of choice that are stable, model-specific, and distinct from known human biases.
arXiv Detail & Related papers (2025-10-12T21:40:29Z) - Disagreements in Reasoning: How a Model's Thinking Process Dictates Persuasion in Multi-Agent Systems [49.69773210844221]
This paper challenges the prevailing hypothesis that persuasive efficacy is primarily a function of model scale. Through a series of multi-agent persuasion experiments, we uncover a fundamental trade-off we term the Persuasion Duality. Our findings reveal that the reasoning process in LRMs exhibits significantly greater resistance to persuasion, with models maintaining their initial beliefs more robustly.
arXiv Detail & Related papers (2025-09-25T12:03:10Z) - Normative Moral Pluralism for AI: A Framework for Deliberation in Complex Moral Contexts [0.0]
The conceptual framework proposed in this paper centers on the development of a deliberative moral reasoning system. It is designed to process complex moral situations by generating, filtering, and weighing normative arguments drawn from diverse ethical perspectives.
arXiv Detail & Related papers (2025-08-10T14:52:23Z) - "Pull or Not to Pull?": Investigating Moral Biases in Leading Large Language Models Across Ethical Dilemmas [11.229443362516207]
This study presents a comprehensive empirical evaluation of 14 leading large language models (LLMs). We elicited 3,780 binary decisions and natural language justifications, enabling analysis along the axes of decisional assertiveness, explanation-answer consistency, public moral alignment, and sensitivity to ethically irrelevant cues. We advocate for moral reasoning to become a primary axis in LLM alignment, calling for standardized benchmarks that evaluate not just what LLMs decide, but how and why.
arXiv Detail & Related papers (2025-08-10T10:45:16Z) - Are Language Models Consequentialist or Deontological Moral Reasoners? [75.6788742799773]
We focus on a large-scale analysis of the moral reasoning traces provided by large language models (LLMs). We introduce and test a taxonomy of moral rationales to systematically classify reasoning traces according to two main normative ethical theories: consequentialism and deontology.
arXiv Detail & Related papers (2025-05-27T17:51:18Z) - The Convergent Ethics of AI? Analyzing Moral Foundation Priorities in Large Language Models with a Multi-Framework Approach [6.0972634521845475]
This paper introduces the Priorities in Reasoning and Intrinsic Moral Evaluation (PRIME) framework. PRIME is a comprehensive methodology for analyzing moral priorities across foundational ethical dimensions. We apply this framework to six leading large language models (LLMs) through a dual-protocol approach.
arXiv Detail & Related papers (2025-04-27T14:26:48Z) - Analyzing the Ethical Logic of Six Large Language Models [1.119697400073873]
This study examines the ethical reasoning of six prominent generative large language models: OpenAI GPT-4o, Meta LLaMA 3.1, Perplexity, Anthropic Claude 3.5 Sonnet, Google Gemini, and Mistral 7B. Findings reveal that LLMs exhibit largely convergent ethical logic, marked by a rationalist, consequentialist emphasis, with decisions often prioritizing harm and fairness.
arXiv Detail & Related papers (2025-01-15T16:56:26Z) - Failure Modes of LLMs for Causal Reasoning on Narratives [51.19592551510628]
We investigate the interaction between world knowledge and logical reasoning. We find that state-of-the-art large language models (LLMs) often rely on superficial generalizations. We show that simple reformulations of the task can elicit more robust reasoning behavior.
arXiv Detail & Related papers (2024-10-31T12:48:58Z) - Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z) - Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z) - Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs [55.66353783572259]
Causal-Consistency Chain-of-Thought harnesses multi-agent collaboration to bolster the faithfulness and causality of foundation models. Our framework demonstrates significant superiority over state-of-the-art methods through extensive and comprehensive evaluations.
arXiv Detail & Related papers (2023-08-23T04:59:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.