From Axioms to Algorithms: Mechanized Proofs of the vNM Utility Theorem
- URL: http://arxiv.org/abs/2506.07066v1
- Date: Sun, 08 Jun 2025 10:09:54 GMT
- Title: From Axioms to Algorithms: Mechanized Proofs of the vNM Utility Theorem
- Authors: Li Jingyuan
- Abstract summary: We implement the classical axioms of preference: completeness, transitivity, continuity, and independence. Our formalization captures the mathematical structure of preference relations over lotteries. This formalization provides a rigorous foundation for applications in economic modeling, AI alignment, and management decision systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a comprehensive formalization of the von Neumann-Morgenstern (vNM) expected utility theorem using the Lean 4 interactive theorem prover. We implement the classical axioms of preference (completeness, transitivity, continuity, and independence), enabling machine-verified proofs of both the existence and uniqueness of utility representations. Our formalization captures the mathematical structure of preference relations over lotteries, verifying that preferences satisfying the vNM axioms can be represented by expected utility maximization. Our contributions include a granular implementation of the independence axiom, formally verified proofs of fundamental claims about mixture lotteries, constructive demonstrations of utility existence, and computational experiments validating the results. We prove equivalence to classical presentations while offering greater precision at decision boundaries. This formalization provides a rigorous foundation for applications in economic modeling, AI alignment, and management decision systems, bridging the gap between theoretical decision theory and computational implementation.
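To make the formalized objects concrete, the following is a minimal Lean 4 sketch (building on Mathlib) of lotteries over a finite outcome set, the four vNM axioms stated against an abstract mixture operation, and the statement of the representation theorem: a preference relation satisfying the axioms is represented by expected utility, i.e. p is weakly preferred to q iff Σ_x p(x)·u(x) ≥ Σ_x q(x)·u(x). All identifiers here (Lottery, VNMAxioms, vnm_representation) are illustrative and need not match the paper's actual Lean development, and the proof is left as sorry since the mechanized proof is precisely the paper's contribution.

```lean
import Mathlib

/-- A lottery over a finite outcome type `X`: a probability mass function.
    Illustrative sketch only; the paper's definitions may differ. -/
structure Lottery (X : Type*) [Fintype X] where
  prob    : X → ℝ
  nonneg  : ∀ x, 0 ≤ prob x
  sum_one : ∑ x, prob x = 1

variable {X : Type*} [Fintype X]

/-- Expected utility of a lottery `p` under a Bernoulli utility `u`. -/
noncomputable def expectedUtility (p : Lottery X) (u : X → ℝ) : ℝ :=
  ∑ x, p.prob x * u x

/-- The vNM axioms for a weak-preference relation `pref` on lotteries,
    stated relative to a mixture operation `mix p q α = α•p + (1-α)•q`
    (whose construction is omitted in this sketch). -/
structure VNMAxioms (pref : Lottery X → Lottery X → Prop)
    (mix : Lottery X → Lottery X → ℝ → Lottery X) : Prop where
  complete     : ∀ p q, pref p q ∨ pref q p
  transitive   : ∀ p q r, pref p q → pref q r → pref p r
  continuity   : ∀ p q r, pref p q → pref q r →
    ∃ α ∈ Set.Icc (0 : ℝ) 1, pref (mix p r α) q ∧ pref q (mix p r α)
  independence : ∀ p q r, ∀ α ∈ Set.Ioc (0 : ℝ) 1,
    (pref p q ↔ pref (mix p r α) (mix q r α))

/-- Statement of the vNM representation theorem: preferences satisfying the
    axioms are represented by expected-utility comparisons.  Only the
    statement is sketched; the mechanized proof is the paper's subject. -/
theorem vnm_representation
    (pref : Lottery X → Lottery X → Prop)
    (mix  : Lottery X → Lottery X → ℝ → Lottery X)
    (h : VNMAxioms pref mix) :
    ∃ u : X → ℝ, ∀ p q : Lottery X,
      pref p q ↔ expectedUtility q u ≤ expectedUtility p u := by
  sorry
```

Uniqueness of u up to positive affine transformation, also mechanized in the paper, would be stated as a separate theorem and is omitted from this sketch.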
Related papers
- Formal Power Series Representations in Probability and Expected Utility Theory [0.24578723416255752]
We advance a theory of coherent preference that surrenders restrictions embodied in orthodox doctrine. Unlike de Finetti's theory, the one we set forth requires neither transitivity nor Archimedeanness nor boundedness nor continuity of preference. Representability by utility is a corollary of this paper's central result, which at once extends Hölder's Theorem and strengthens Hahn's Embedding Theorem.
arXiv Detail & Related papers (2025-08-01T03:34:39Z) - LeanConjecturer: Automatic Generation of Mathematical Conjectures for Theorem Proving [6.220998637943786]
We introduce LeanConjecturer, a pipeline for automatically generating university-level mathematical conjectures in Lean 4 using Large Language Models (LLMs). Through iterative generation and evaluation, LeanConjecturer produced 12,289 conjectures from 40 Mathlib seed files, with 3,776 identified as syntactically valid and non-trivial.
arXiv Detail & Related papers (2025-06-27T08:17:18Z) - DeepTheorem: Advancing LLM Reasoning for Theorem Proving Through Natural Language and Reinforcement Learning [67.93945726549289]
DeepTheorem is a comprehensive informal theorem-proving framework exploiting natural language to enhance mathematical reasoning. DeepTheorem includes a large-scale benchmark dataset consisting of 121K high-quality IMO-level informal theorems and proofs. We devise a novel reinforcement learning strategy (RL-Zero) explicitly tailored to informal theorem proving, leveraging the verified theorem variants to incentivize robust mathematical inference.
arXiv Detail & Related papers (2025-05-29T17:59:39Z) - Theory: Multidimensional Space of Events [0.0]
This paper develops a multidimensional space of events (MDSE) theory that accounts for mutual influences between events and hypothesis sets. MDSE successfully models interrelated variables with 15-20% improved prediction accuracy. This approach offers practical applications in engineering challenges including risk assessment, resource optimization, and forecasting problems.
arXiv Detail & Related papers (2025-05-16T08:54:12Z) - Function-coherent gambles [0.0]
This paper introduces function-coherent gambles, a generalization that accommodates non-linear utility. We prove a representation theorem that characterizes acceptable gambles through continuous linear functionals. We demonstrate how these alternatives to constant-rate exponential discounting can be integrated within the function-coherent framework.
arXiv Detail & Related papers (2025-02-22T14:44:54Z) - Alchemy: Amplifying Theorem-Proving Capability through Symbolic Mutation [71.32761934724867]
This work proposes Alchemy, a framework for data synthesis that constructs formal theorems through symbolic mutation. For each candidate theorem in Mathlib, we identify all invocable theorems that can be used to rewrite it or be applied to it. As a result, our method increases the number of theorems in Mathlib by an order of magnitude, from 110k to 6M.
arXiv Detail & Related papers (2024-10-21T08:04:21Z) - The Foundations of Tokenization: Statistical and Computational Concerns [51.370165245628975]
Tokenization is a critical step in the NLP pipeline. Despite its recognized importance as a standard representation method in NLP, the theoretical underpinnings of tokenization are not yet fully understood. The present paper contributes to addressing this theoretical gap by proposing a unified formal framework for representing and analyzing tokenizer models.
arXiv Detail & Related papers (2024-07-16T11:12:28Z) - Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z) - Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z) - The vacuum provides quantum advantage to otherwise simulatable architectures [49.1574468325115]
We consider a computational model composed of ideal Gottesman-Kitaev-Preskill stabilizer states.
We provide an algorithm to calculate the probability density function of the measurement outcomes.
arXiv Detail & Related papers (2022-05-19T18:03:17Z) - A Topological Perspective on Causal Inference [10.965065178451104]
We show that substantive assumption-free causal inference is possible only in a meager set of structural causal models.
Our results show that inductive assumptions sufficient to license valid causal inferences are statistically unverifiable in principle.
An additional benefit of our topological approach is that it easily accommodates SCMs with infinitely many variables.
arXiv Detail & Related papers (2021-07-18T23:09:03Z)