Paraconsistent Foundations for Probabilistic Reasoning, Programming and
Concept Formation
- URL: http://arxiv.org/abs/2012.14474v2
- Date: Thu, 14 Jan 2021 17:51:17 GMT
- Title: Paraconsistent Foundations for Probabilistic Reasoning, Programming and
Concept Formation
- Authors: Ben Goertzel
- Abstract summary: It is argued that 4-valued paraconsistent truth values (called here "p-bits") can serve as a conceptual, mathematical and practical foundation for highly AI-relevant forms of probabilistic logic and programming and concept formation.
It is shown that appropriate averaging-across-situations and renormalization of 4-valued p-bits operating in accordance with Constructible Duality (CD) logic yields PLN (Probabilistic Logic Networks) strength-and-confidence truth values.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is argued that 4-valued paraconsistent truth values (called here "p-bits")
can serve as a conceptual, mathematical and practical foundation for highly
AI-relevant forms of probabilistic logic and probabilistic programming and
concept formation.
First it is shown that appropriate averaging-across-situations and
renormalization of 4-valued p-bits operating in accordance with Constructible
Duality (CD) logic yields PLN (Probabilistic Logic Networks)
strength-and-confidence truth values. Then variations on the Curry-Howard
correspondence are used to map these paraconsistent and probabilistic logics
into probabilistic types suitable for use within dependent type based
programming languages.
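To make the first construction concrete, below is a minimal Python sketch of 4-valued p-bits being averaged across situations and renormalized into a PLN-style strength-and-confidence pair. The counting scheme and the lookahead parameter k are illustrative assumptions; the paper derives the actual averaging and renormalization from CD logic.

```python
from dataclasses import dataclass

# A "p-bit" is a 4-valued paraconsistent truth value: a pair of independent
# evidence bits (truth-evidence, falsity-evidence).  The four CD-logic values
# are True (1, 0), False (0, 1), Both (1, 1) and Neither (0, 0).

@dataclass
class PLNTruthValue:
    strength: float    # probability-like degree of truth
    confidence: float  # how much evidence backs that strength

def pbits_to_pln(pbits, k=10.0):
    """Average p-bits observed across situations and renormalize them into a
    PLN-style (strength, confidence) pair.

    The counting scheme here is an illustrative assumption, not the paper's
    derivation from CD logic; n / (n + k) is PLN's usual evidence-to-confidence
    map with lookahead parameter k.
    """
    pos = sum(t for t, _ in pbits)   # situations giving evidence for truth
    neg = sum(f for _, f in pbits)   # situations giving evidence for falsity
    total = pos + neg
    strength = pos / total if total else 0.5
    confidence = total / (total + k)
    return PLNTruthValue(strength, confidence)

# Mostly-true observations, with one contradictory ("Both") situation:
observations = [(1, 0), (1, 0), (1, 1), (0, 1), (1, 0)]
print(pbits_to_pln(observations))   # strength = 0.67, confidence = 0.375
```

Note that the contradictory observation contributes to both evidence counters at once, which is exactly what a two-valued encoding cannot express.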
Zach Weber's paraconsistent analysis of the sorites paradox is extended to
form a paraconsistent / probabilistic / fuzzy analysis of concept boundaries;
and a paraconsistent version of concept formation via Formal Concept Analysis
is presented, building on a definition of fuzzy property-value degrees in terms
of relative entropy on paraconsistent probability distributions.
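A hedged sketch of how a relative-entropy-based property-value degree could look: compare an object's distribution over the four paraconsistent truth values against a prototype distribution for the property, and squash the divergence into [0, 1]. The prototype comparison and the exp(-KL) squashing are illustrative choices, not the paper's definitions.

```python
import math

def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two distributions over
    the four paraconsistent truth values (True, False, Both, Neither)."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

def property_degree(object_dist, prototype_dist):
    """Fuzzy degree to which an object has a property, scored by how close the
    object's paraconsistent truth-value distribution is to the property's
    prototype distribution.  The exp(-KL) squashing is an illustrative choice."""
    return math.exp(-relative_entropy(object_dist, prototype_dist))

# Distributions over (True, False, Both, Neither) for "x is a heap":
clearly_heap = (0.85, 0.05, 0.05, 0.05)
borderline   = (0.40, 0.35, 0.20, 0.05)   # sorites-style boundary case
print(property_degree(clearly_heap, clearly_heap))   # 1.0
print(property_degree(borderline, clearly_heap))     # roughly 0.5
```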
These general points are fleshed out via reference to the realization of
probabilistic reasoning and programming and concept formation in the OpenCog
AGI framework which is centered on collaborative multi-algorithm updating of a
common knowledge metagraph.
Related papers
- The Foundations of Tokenization: Statistical and Computational Concerns [51.370165245628975]
Tokenization is a critical step in the NLP pipeline.
Despite its recognized importance as a standard representation method in NLP, the theoretical underpinnings of tokenization are not yet fully understood.
The present paper contributes to addressing this theoretical gap by proposing a unified formal framework for representing and analyzing tokenizer models.
arXiv Detail & Related papers (2024-07-16T11:12:28Z)
Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations [80.86128012438834]
We show for the first time that computing the robustness of counterfactuals with respect to plausible model shifts is NP-complete.
We propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees.
arXiv Detail & Related papers (2024-07-10T09:13:11Z)
Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z)
dPASP: A Comprehensive Differentiable Probabilistic Answer Set Programming Environment For Neurosymbolic Learning and Reasoning [0.0]
We present dPASP, a novel declarative logic programming framework for differentiable neuro-symbolic reasoning.
We discuss the several semantics for probabilistic logic programs that can express nondeterministic, contradictory, incomplete and/or statistical knowledge.
We then describe an implemented package that supports inference and learning in the language, along with several example programs.
arXiv Detail & Related papers (2023-08-05T19:36:58Z)
Probabilistic unifying relations for modelling epistemic and aleatoric uncertainty: semantics and automated reasoning with theorem proving [0.3441021278275805]
Probabilistic programming combines general computer programming, statistical inference, and formal semantics.
ProbURel is based on Hehner's predicative probabilistic programming, but there are several obstacles to the broader adoption of his work.
Our contributions include the formalisation of relations using Unifying Theories of Programming (UTP) and probabilities outside the brackets.
We demonstrate our work with six examples, including problems in robot localisation, classification in machine learning, and the termination of probabilistic loops.
arXiv Detail & Related papers (2023-03-16T23:36:57Z)
Checking Trustworthiness of Probabilistic Computations in a Typed Natural Deduction System [0.0]
Derivability in TPTND is interpreted as the process of extracting $n$ samples with a certain frequency from a given categorical distribution.
We present a computational semantics for the terms over which we reason and then the semantics of TPTND.
We illustrate structural and metatheoretical properties, with particular focus on the ability to establish under which term evolutions and logical rule applications the notion of trustworthiness can be preserved.
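A rough illustration of the sampling-frequency reading described above (not TPTND's typed calculus itself): draw n samples from a categorical distribution and check whether the observed frequency of an outcome supports a claimed probability.

```python
import random

def sample_frequency(dist, outcome, n, seed=0):
    """Draw n samples from a categorical distribution (label -> probability)
    and return the observed frequency of `outcome`."""
    rng = random.Random(seed)
    labels, weights = zip(*dist.items())
    draws = rng.choices(labels, weights=weights, k=n)
    return draws.count(outcome) / n

def looks_trustworthy(dist, outcome, claimed, n=1000, tol=0.05):
    """Crude frequency check standing in for the idea of judging a
    probabilistic computation by the samples it produces."""
    return abs(sample_frequency(dist, outcome, n) - claimed) <= tol

die = {"one": 1/6, "two": 1/6, "three": 1/6,
       "four": 1/6, "five": 1/6, "six": 1/6}
print(looks_trustworthy(die, "six", claimed=1/6))   # True for a fair die
print(looks_trustworthy(die, "six", claimed=0.5))   # False: the claim overstates it
```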
arXiv Detail & Related papers (2022-06-26T17:55:32Z)
Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
Logical Credal Networks [87.25387518070411]
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
arXiv Detail & Related papers (2021-09-25T00:00:47Z)
Entropy-based Logic Explanations of Neural Networks [24.43410365335306]
We propose an end-to-end differentiable approach for extracting logic explanations from neural networks.
The method relies on an entropy-based criterion which automatically identifies the most relevant concepts.
We consider four different case studies to demonstrate that: (i) this entropy-based criterion enables the distillation of concise logic explanations in safety-critical domains from clinical data to computer vision; (ii) the proposed approach outperforms state-of-the-art white-box models in terms of classification accuracy.
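A generic illustration of an entropy-style relevance filter over learned concept weights (the hypothetical weights and the probability-mass cutoff are assumptions, not the paper's exact criterion):

```python
import math

def softmax(ws):
    m = max(ws)
    exps = [math.exp(w - m) for w in ws]
    s = sum(exps)
    return [e / s for e in exps]

def relevant_concepts(weights, mass=0.9):
    """Rank concepts by a softmax over their learned weights and keep the
    smallest set covering `mass` of the probability."""
    names = list(weights)
    probs = softmax([weights[n] for n in names])
    entropy = -sum(p * math.log(p) for p in probs)  # low entropy => few concepts matter
    kept, acc = [], 0.0
    for name, p in sorted(zip(names, probs), key=lambda x: -x[1]):
        kept.append(name)
        acc += p
        if acc >= mass:
            break
    return entropy, kept

print(relevant_concepts({"fever": 3.0, "cough": 2.5, "age": 0.1, "zipcode": -1.0}))
# -> (about 0.84, ['fever', 'cough'])
```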
arXiv Detail & Related papers (2021-06-12T15:50:47Z)
Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
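A generic sketch of a "neuron as a weighted real-valued logic formula", using a Łukasiewicz-style conjunction and its De Morgan dual; the exact activation used in the paper may differ:

```python
def weighted_and(inputs, weights, beta=1.0):
    """Weighted real-valued conjunction: 1 when every weighted input is fully
    true, decreasing as inputs fail.  Offered only as a generic illustration."""
    val = beta - sum(w * (1.0 - x) for x, w in zip(inputs, weights))
    return min(1.0, max(0.0, val))

def weighted_or(inputs, weights, beta=1.0):
    # De Morgan dual of the conjunction above.
    return 1.0 - weighted_and([1.0 - x for x in inputs], weights, beta)

print(weighted_and([0.9, 0.8], [1.0, 1.0]))   # 0.7: both antecedents mostly true
print(weighted_or([0.1, 0.95], [1.0, 1.0]))   # 1.0: one disjunct is nearly true
```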
arXiv Detail & Related papers (2020-06-23T16:55:45Z)