The MEVIR 2 Framework: A Virtue-Informed Moral-Epistemic Model of Human Trust Decisions
- URL: http://arxiv.org/abs/2512.18539v1
- Date: Sat, 20 Dec 2025 23:32:54 GMT
- Title: The MEVIR 2 Framework: A Virtue-Informed Moral-Epistemic Model of Human Trust Decisions
- Authors: Daniel Schwabe
- Abstract summary: MEVIR 2 recognizes that human trust emerges from three interacting foundations. Its key innovation introduces "Truth Tribes" (TTs): stable communities sharing aligned procedural, virtue, and moral-epistemic profiles.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The MEVIR 2 framework improves how we understand trust decisions in a polarized information landscape. Unlike classical models that assume ideal rationality, MEVIR 2 recognizes that human trust emerges from three interacting foundations: how we process evidence procedurally, our character as epistemic agents (virtue theory), and our moral intuitions shaped by both evolutionary cooperation (the Morality-as-Cooperation, or MAC, model) and cultural values (Extended Moral Foundations Theory). This explains why different people find different authorities, facts, and tradeoffs compelling. MEVIR 2's key innovation introduces "Truth Tribes" (TTs): stable communities sharing aligned procedural, virtue, and moral-epistemic profiles. These aren't mere ideological groups but emergent clusters with internally coherent "trust lattices" that remain mutually unintelligible across tribal boundaries. The framework incorporates distinctions between Truth Bearers and Truth Makers, showing that disagreements often stem from fundamentally different views about which aspects of reality can make propositions true. Case studies on vaccination mandates and climate policy demonstrate how different moral configurations lead people to select different authorities, evidential standards, and trust anchors, constructing separate moral-epistemic worlds. The framework reinterprets cognitive biases as failures of epistemic virtue and provides foundations for designing decision-support systems that could enhance metacognition, make trust processes transparent, and foster more conscientious reasoning across divided communities. MEVIR 2 thus offers both descriptive power for understanding polarization and normative guidance for bridging epistemic divides.
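The idea of Truth Tribes as emergent clusters of aligned moral-epistemic profiles admits a simple computational reading. The sketch below is not from the paper; the profile dimensions, the cosine-similarity measure, and the alignment threshold are all illustrative assumptions. It models agents as vectors over procedural, virtue, and moral dimensions and lets "tribes" emerge by mutual alignment:

```python
# Illustrative sketch (not the paper's model): "Truth Tribes" as emergent
# clusters of agents whose moral-epistemic profiles are mutually aligned.
# Profile dimensions, similarity measure, and threshold are all assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_profile() -> np.ndarray:
    """One agent: concatenated procedural, virtue, and moral sub-profiles."""
    procedural = rng.dirichlet(np.ones(3))  # e.g. weight on data / testimony / authority
    virtue = rng.dirichlet(np.ones(3))      # e.g. humility / diligence / open-mindedness
    moral = rng.dirichlet(np.ones(4))       # e.g. care / fairness / loyalty / sanctity
    return np.concatenate([procedural, virtue, moral])

agents = np.stack([make_profile() for _ in range(200)])
unit = agents / np.linalg.norm(agents, axis=1, keepdims=True)
sim = unit @ unit.T  # pairwise cosine similarity of profiles

# Greedy clustering: join the first tribe whose founder you align with,
# otherwise found a new tribe.
THRESHOLD = 0.95
tribe_of = np.full(len(agents), -1)
founders: list[int] = []
for i in range(len(agents)):
    for t, f in enumerate(founders):
        if sim[i, f] >= THRESHOLD:
            tribe_of[i] = t
            break
    else:
        tribe_of[i] = len(founders)
        founders.append(i)

same = tribe_of[:, None] == tribe_of[None, :]
off_diag = ~np.eye(len(agents), dtype=bool)
print(f"{len(founders)} tribes emerged from {len(agents)} agents")
print(f"mean within-tribe alignment: {sim[same & off_diag].mean():.3f}")
print(f"mean cross-tribe alignment:  {sim[~same].mean():.3f}")
```

On this toy reading, high within-tribe and low cross-tribe alignment is the quantitative shadow of the paper's claim that trust lattices are internally coherent yet mutually unintelligible across tribal boundaries.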
Related papers
- The Epistemological Consequences of Large Language Models: Rethinking collective intelligence and institutional knowledge
We develop a theory of rationality as distributed across human collectives, using dual process theory as background. We distinguish internalist justification, defined as reflective understanding of why a proposition is true, from externalist justification, defined as reliable transmission of truths. We argue that LLMs approximate externalist reliabilism because they can reliably transmit information whose justificatory basis is established elsewhere, but they do not themselves possess reflective justification.
arXiv Detail & Related papers (2025-12-22T16:52:37Z)
- The MEVIR Framework: A Virtue-Informed Moral-Epistemic Model of Human Trust Decisions
This report introduces the Moral-Epistemic VIRtue-informed (MEVIR) framework. Central to the framework are ontological concepts: Truth Bearers, Truth Makers, and Ontological Unpacking. The report analyzes how propaganda, psychological operations, and echo chambers exploit the MEVIR process.
arXiv Detail & Related papers (2025-12-02T01:11:35Z)
- Exploring Syntropic Frameworks in AI Alignment: A Philosophical Investigation
I argue that AI alignment should be reconceived as architecting syntropic, reasons-responsive agents through process-based, multi-agent, developmental mechanisms. I articulate the "specification trap" argument, demonstrating why content-based value specification appears structurally unstable. I propose syntropy as an information-theoretic framework for understanding multi-agent alignment dynamics.
arXiv Detail & Related papers (2025-11-19T23:31:29Z)
- When Your AI Agent Succumbs to Peer-Pressure: Studying Opinion-Change Dynamics of LLMs
We investigate how peer pressure influences the opinions of Large Language Model (LLM) agents across a spectrum of cognitive commitments. Agents follow a sigmoid curve: stable at low pressure, shifting sharply at a threshold, and saturating at high pressure. We uncover a fundamental "persuasion asymmetry": shifting an opinion from affirmative to negative requires a different cognitive effort than the reverse.
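The sigmoid dynamics described above correspond to a standard logistic response. A minimal sketch, assuming illustrative parameter values (the steepness k and threshold x0 below are placeholders, not the paper's estimates):

```python
# Logistic model of opinion-shift probability under peer pressure.
# k (steepness) and x0 (threshold) are hypothetical placeholders.
import math

def shift_probability(pressure: float, k: float = 12.0, x0: float = 0.5) -> float:
    """P(agent abandons its opinion) at a peer-pressure level in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-k * (pressure - x0)))

for p in (0.1, 0.4, 0.5, 0.6, 0.9):
    print(f"pressure={p:.1f} -> shift probability={shift_probability(p):.3f}")
# Stable at low pressure, sharp transition near x0, saturation at high pressure.
```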
arXiv Detail & Related papers (2025-10-21T22:02:15Z)
- Normative Moral Pluralism for AI: A Framework for Deliberation in Complex Moral Contexts
The conceptual framework proposed in this paper centers on the development of a deliberative moral reasoning system. It is designed to process complex moral situations by generating, filtering, and weighing normative arguments drawn from diverse ethical perspectives.
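A generate-filter-weigh pipeline of this kind can be sketched directly. The skeleton below is a hypothetical illustration, not the paper's system, and every name in it is an assumption:

```python
# Hypothetical skeleton of a deliberative moral-reasoning pipeline:
# generate arguments from several ethical perspectives, filter the
# inapplicable ones, then weigh what remains.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Argument:
    perspective: str   # e.g. "consequentialist", "deontological", "virtue"
    claim: str
    weight: float      # strength assigned by the weighing step

def deliberate(situation: str,
               generators: List[Callable[[str], Argument]],
               applicable: Callable[[Argument, str], bool]) -> List[Argument]:
    candidates = [g(situation) for g in generators]                 # generate
    relevant = [a for a in candidates if applicable(a, situation)]  # filter
    return sorted(relevant, key=lambda a: a.weight, reverse=True)   # weigh

# Usage with toy stand-ins for the perspective modules:
gens = [
    lambda s: Argument("consequentialist", f"maximize welfare in: {s}", 0.7),
    lambda s: Argument("deontological", f"respect duties in: {s}", 0.9),
]
for arg in deliberate("triage scenario", gens, lambda a, s: True):
    print(arg.perspective, "->", arg.claim)
```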
arXiv Detail & Related papers (2025-08-10T14:52:23Z)
- Are Language Models Consequentialist or Deontological Moral Reasoners?
We focus on a large-scale analysis of the moral reasoning traces provided by large language models (LLMs). We introduce and test a taxonomy of moral rationales to systematically classify reasoning traces according to two main normative ethical theories: consequentialism and deontology.
arXiv Detail & Related papers (2025-05-27T17:51:18Z)
- Societal and technological progress as sewing an ever-growing, ever-changing, patchy, and polychrome quilt
We are worried that such systems, which overlook enduring moral diversity, will spark resistance, erode trust, and destabilize our institutions. This paper traces the underlying problem to an often-unstated Axiom of Rational Convergence: the idea that under ideal conditions, rational agents will converge in the limit of conversation on a single ethics. Treating that premise as both optional and doubtful, we propose what we call the appropriateness framework: an alternative approach grounded in conflict theory, cultural evolution, multi-agent systems, and institutional economics.
arXiv Detail & Related papers (2025-05-08T12:55:07Z)
- Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties
Value pluralism is the view that multiple correct values may be held in tension with one another.
As statistical learners, AI systems fit to averages by default, washing out potentially irreducible value conflicts.
We introduce ValuePrism, a large-scale dataset of 218k values, rights, and duties connected to 31k human-written situations.
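The "fitting to averages" point is easy to see concretely; a toy illustration (not from the paper):

```python
# Toy illustration: a statistical learner that fits the average of
# conflicting value judgments erases the conflict entirely.
judgments = [+1, +1, -1, -1]  # two communities with opposite verdicts
average = sum(judgments) / len(judgments)
print(average)  # 0.0 -> the irreducible disagreement vanishes into "no signal"
```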
arXiv Detail & Related papers (2023-09-02T01:24:59Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories?
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- MindDial: Belief Dynamics Tracking with Theory-of-Mind Modeling for Situated Neural Dialogue Generation
MindDial is a novel conversational framework that can generate situated free-form responses with theory-of-mind modeling.
We introduce an explicit mind module that can track the speaker's belief and the speaker's prediction of the listener's belief.
Our framework is applied to both prompting and fine-tuning-based models, and is evaluated across scenarios involving both common ground alignment and negotiation.
arXiv Detail & Related papers (2023-06-27T07:24:32Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
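One way to read the contractual-trust formalization is as a small data structure. The sketch below is an interpretive illustration, and its field names are assumptions rather than the paper's notation:

```python
# Interpretive sketch of "contractual trust": a user's trust in an AI is
# anticipation that an implicit or explicit contract will hold, and it is
# only meaningful if the user accepts vulnerability to its breach.
from dataclasses import dataclass

@dataclass
class Contract:
    commitment: str   # e.g. "the model only cites sources that exist"
    explicit: bool    # stated in the interface vs. merely assumed by the user

@dataclass
class TrustAssessment:
    contract: Contract
    credence_upheld: float  # user's belief the contract will hold, in [0, 1]
    vulnerable: bool        # the user risks something if the contract fails

    def trusts(self) -> bool:
        # Trust = anticipating the contract holds while accepting vulnerability.
        return self.vulnerable and self.credence_upheld > 0.5
```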
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.