Algorithmic Bayesian Epistemology
- URL: http://arxiv.org/abs/2403.07949v1
- Date: Mon, 11 Mar 2024 23:03:04 GMT
- Title: Algorithmic Bayesian Epistemology
- Authors: Eric Neyman
- Abstract summary: One aspect of the algorithmic lens in theoretical computer science is a view on other scientific disciplines that focuses on satisfactory solutions adhering to real-world constraints.
This thesis applies the algorithmic lens to Bayesian epistemology, establishing possibility and impossibility results about belief formation under informational, computational, communication, and strategic constraints.
- Score: 2.44755919161855
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One aspect of the algorithmic lens in theoretical computer science is a view
on other scientific disciplines that focuses on satisfactory solutions that
adhere to real-world constraints, as opposed to solutions that would be optimal
ignoring such constraints. The algorithmic lens has provided a unique and
important perspective on many academic fields, including molecular biology,
ecology, neuroscience, quantum physics, economics, and social science.
This thesis applies the algorithmic lens to Bayesian epistemology.
Traditional Bayesian epistemology provides a comprehensive framework for how an
individual's beliefs should evolve upon receiving new information. However,
these methods typically assume an exhaustive model of such information,
including the correlation structure between different pieces of evidence. In
reality, individuals might lack such an exhaustive model, while still needing
to form beliefs. Beyond such informational constraints, an individual may be
bounded by limited computation, or by limited communication with agents that
have access to information, or by the strategic behavior of such agents. Even
when these restrictions prevent the formation of a *perfectly* accurate belief,
arriving at a *reasonably* accurate belief remains crucial. In this thesis, we
establish fundamental possibility and impossibility results about belief
formation under a variety of restrictions, and lay the groundwork for further
exploration.
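As a concrete illustration of the informational constraint described above, the following is a minimal sketch (illustrative only, not taken from the thesis): it updates a prior on a binary hypothesis with two pieces of evidence, once under the standard assumption that the correlation structure is fully specified (here, conditional independence), and once when the two reports are in fact perfectly correlated. All names and numbers are hypothetical.

```python
# Minimal sketch (illustrative, not from the thesis): Bayesian updating on a
# binary hypothesis H given evidence E, via Bayes' rule.

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Return P(H | E) from P(H), P(E | H), and P(E | not H)."""
    numerator = prior * likelihood_h
    denominator = numerator + (1.0 - prior) * likelihood_not_h
    return numerator / denominator

prior = 0.5

# Standard Bayesian treatment: the model specifies how the two pieces of
# evidence relate. Assuming conditional independence, they are folded in
# sequentially.
posterior = bayes_update(prior, likelihood_h=0.8, likelihood_not_h=0.3)      # after E1
posterior = bayes_update(posterior, likelihood_h=0.8, likelihood_not_h=0.3)  # after E2
print(f"Posterior assuming independent evidence: {posterior:.3f}")  # ~0.877

# If E2 merely repeats E1 (perfectly correlated evidence), the correct
# posterior counts the evidence only once.
correct = bayes_update(prior, likelihood_h=0.8, likelihood_not_h=0.3)
print(f"Posterior with perfectly correlated evidence: {correct:.3f}")  # ~0.727
```

The gap between the two posteriors (about 0.877 versus 0.727) shows how an agent who lacks the correlation structure of its evidence can systematically over- or under-update, which is the kind of constraint the thesis studies.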
Related papers
- Algorithmic Idealism II: Reassessment of Competing Theories [0.0]
This paper explores the intersection of identity, individuality, and reality through competing frameworks.
Traditional metaphysical notions of fixed identity are challenged by advancements in cloning, teletransportation, and digital replication.
Computational approaches, such as the Ruliad and Constructor Theory, offer expansive views of emergent realities but often lack practical constraints for observer relevance.
Algorithmic idealism is introduced as a unifying framework, proposing that reality is an emergent construct governed by computational rules.
arXiv Detail & Related papers (2024-12-16T19:52:29Z) - Algorithmic Idealism I: Reconceptualizing Reality Through Information and Experience [0.0]
Algorithmic idealism represents a transformative approach to understanding reality.
It emphasizes the informational structure of self-states and their algorithmic transitions.
It raises profound ethical questions regarding the continuity, duplication, and termination of informational entities.
arXiv Detail & Related papers (2024-12-16T17:33:43Z) - A simplicity bubble problem and zemblanity in digitally intermediated societies [1.4380443010065829]
We discuss the ubiquity of Big Data and machine learning in society.
We show that there is a ceiling above which formal knowledge cannot further decrease the probability of zemblanitous findings.
arXiv Detail & Related papers (2023-04-21T00:02:15Z) - Causal Deep Learning [77.49632479298745]
Causality has the potential to transform the way we solve real-world problems.
But causality often requires crucial assumptions which cannot be tested in practice.
We propose a new way of thinking about causality -- we call this causal deep learning.
arXiv Detail & Related papers (2023-03-03T19:19:18Z) - On Binding Objects to Symbols: Learning Physical Concepts to Understand Real from Fake [155.6741526791004]
We revisit the classic signal-to-symbol barrier in light of the remarkable ability of deep neural networks to generate synthetic data.
We characterize physical objects as abstract concepts and use the previous analysis to show that physical objects can be encoded by finite architectures.
We conclude that binding physical entities to digital identities is possible in finite time with finite resources.
arXiv Detail & Related papers (2022-07-25T17:21:59Z) - On Heuristic Models, Assumptions, and Parameters [0.76146285961466]
We argue that the social effects of computing can depend just as much on obscure technical caveats, choices, and qualifiers.
We describe three classes of objects used to encode these choices and qualifiers: models, assumptions, and parameters.
We raise six reasons these objects may be hazardous to comprehensive analysis of computing and argue they deserve deliberate consideration as researchers explain scientific work.
arXiv Detail & Related papers (2022-01-19T04:32:11Z) - Quantum realism: axiomatization and quantification [77.34726150561087]
We build an axiomatization for quantum realism -- a notion of realism compatible with quantum theory.
We explicitly construct some classes of entropic quantifiers that are shown to satisfy almost all of the proposed axioms.
arXiv Detail & Related papers (2021-10-10T18:08:42Z) - The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z) - Constrained Learning with Non-Convex Losses [119.8736858597118]
Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
arXiv Detail & Related papers (2021-03-08T23:10:33Z) - Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.