Generative Logic with Time: Beyond Logical Consistency and Statistical Possibility
- URL: http://arxiv.org/abs/2301.08509v1
- Date: Fri, 20 Jan 2023 10:55:49 GMT
- Title: Generative Logic with Time: Beyond Logical Consistency and Statistical Possibility
- Authors: Hiroyuki Kido
- Abstract summary: We propose a temporal probabilistic model that generates symbolic knowledge from data.
The correctness of the model is justified in terms of consistency with Kolmogorov's axioms, Fenstad's theorems and maximum likelihood estimation.
- Score: 0.6853165736531939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper gives a theory of inference for reasoning logically about symbolic knowledge derived fully from data over time. We propose a temporal probabilistic model that generates symbolic knowledge from data. The statistical correctness of the model is justified in terms of consistency with Kolmogorov's axioms, Fenstad's theorems and maximum likelihood estimation. The logical correctness of the model is justified in terms of logical consequence relations on propositional logic and its extension. We show that the theory is applicable to localisation problems.
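To make the flavour of the proposal concrete, here is a minimal sketch, assuming a toy grid-world localisation task (the grid, the sensor model and all numbers are invented for illustration, not taken from the paper): a discrete Bayes filter maintains a posterior over location states, and a symbolic proposition receives the posterior mass of the states that satisfy it.

```python
# A minimal sketch, not the paper's model: a discrete Bayes filter over
# four grid cells, where a proposition's probability is the posterior
# mass of the cells that satisfy it.
import numpy as np

transition = np.array([    # P(next cell | current cell)
    [0.7, 0.3, 0.0, 0.0],
    [0.1, 0.7, 0.2, 0.0],
    [0.0, 0.2, 0.7, 0.1],
    [0.0, 0.0, 0.3, 0.7],
])
emission = np.array([      # P(sensor reading | cell), two possible readings
    [0.9, 0.1],
    [0.6, 0.4],
    [0.4, 0.6],
    [0.1, 0.9],
])

belief = np.full(4, 0.25)            # uniform prior over the cells
for reading in [0, 0, 1]:            # data arriving over time
    belief = transition.T @ belief   # predict the next state
    belief *= emission[:, reading]   # update on the observation
    belief /= belief.sum()           # normalise, per Kolmogorov's axioms

# Symbolic knowledge: "the robot is in the left half" is satisfied by
# cells 0 and 1, so its probability is their combined posterior mass.
print("P(left half) =", belief[[0, 1]].sum())
```

Maximum likelihood estimation would enter where the transition and emission tables are themselves fitted from data rather than fixed by hand as above.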
Related papers
- Uncertainty Quantification in the Tsetlin Machine [11.592828269085082]
We develop new techniques for uncertainty quantification to further increase explainability. The probability score is an inherent property of any TM variant and is derived through an analysis of the TM learning dynamics. A visualization of the probability scores also reveals that the TM is less confident in its predictions outside the training data domain.
arXiv Detail & Related papers (2025-07-05T22:06:46Z)
- Inference of Abstraction for Grounded Predicate Logic [0.0]
An important open question in AI is what simple and natural principle enables a machine to reason logically for meaningful abstraction with grounded symbols.
This paper explores a conceptually new approach to combining probabilistic reasoning and predicative symbolic reasoning over data.
arXiv Detail & Related papers (2025-02-19T14:07:34Z)
- Towards Privacy-Preserving Relational Data Synthesis via Probabilistic Relational Models [3.877001015064152]
Probabilistic relational models provide a well-established formalism to combine first-order logic and probabilistic models.
The field of artificial intelligence requires increasingly large amounts of relational training data for various machine learning tasks.
Collecting real-world data is often challenging due to privacy concerns, data protection regulations, high costs, and so on.
arXiv Detail & Related papers (2024-09-06T11:24:25Z)
- Estimating Causal Effects from Learned Causal Networks [56.14597641617531]
We propose an alternative paradigm for answering causal-effect queries over discrete observable variables.
We learn the causal Bayesian network and its confounding latent variables directly from the observational data.
We show that this model completion learning approach can be more effective than estimand approaches.
arXiv Detail & Related papers (2024-08-26T08:39:09Z)
- Exchangeable Sequence Models Quantify Uncertainty Over Latent Concepts [6.256239986541708]
We show that pre-trained sequence models are naturally capable of probabilistic reasoning over exchangeable data points. A sequence model learns the relationship between observations, which differs from typical Bayesian models. We show that the sequence prediction loss controls the quality of uncertainty quantification.
arXiv Detail & Related papers (2024-08-06T17:16:10Z)
- On the Efficient Marginalization of Probabilistic Sequence Models [3.5897534810405403]
This dissertation focuses on using autoregressive models to answer complex probabilistic queries.
We develop a class of novel and efficient approximation techniques for marginalization in sequential models that are model-agnostic.
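For orientation, the kind of query being approximated can be written down by brute force, which is exponential in the number of summed-out positions; the uniform model below is a hypothetical stand-in for a trained autoregressive model, and the paper's efficient techniques are not reproduced here.

```python
# Brute-force marginalization in an autoregressive model (illustrative):
# p(x3 | x1) = sum over x2 of p(x2 | x1) * p(x3 | x1, x2).
def p_next(history, token, vocab):
    # Hypothetical placeholder for a trained model p(token | history);
    # uniform over the vocabulary, for illustration only.
    return 1.0 / len(vocab)

def marginal(x1, x3, vocab):
    return sum(
        p_next((x1,), x2, vocab) * p_next((x1, x2), x3, vocab)
        for x2 in vocab
    )

vocab = ["a", "b", "c"]
print(marginal("a", "b", vocab))   # 1/3 under the uniform placeholder
```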
arXiv Detail & Related papers (2024-03-06T19:29:08Z)
- Inference of Abstraction for a Unified Account of Reasoning and Learning [0.0]
We give a simple theory of probabilistic inference for a unified account of reasoning and learning.
We simply model how data cause symbolic knowledge in terms of its satisfiability in formal logic.
arXiv Detail & Related papers (2024-02-14T09:43:35Z)
- Inference of Abstraction for a Unified Account of Symbolic Reasoning from Data [0.0]
We give a unified probabilistic account of various types of symbolic reasoning from data.
The theory gives new insights into reasoning towards human-like machine intelligence.
arXiv Detail & Related papers (2024-02-13T18:24:23Z)
- User-defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems [49.75149094527068]
We show that diffusion models can be adapted to make predictions and provide uncertainty quantification for chaotic dynamical systems.
We develop a probabilistic approximation scheme for the conditional score function which converges to the true distribution as the noise level decreases.
We are able to sample conditionally on nonlinear user-defined events at inference time, and the samples match data statistics even when drawn from the tails of the distribution.
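As a rough sketch of what event-conditional sampling involves, the snippet below uses the standard Bayes decomposition of the conditional score, grad_x log p(x|y) = grad_x log p(x) + grad_x log p(y|x); this is not the paper's specific approximation scheme, and both functions and the example event are invented.

```python
# Conditional-score sketch (generic guidance, not the paper's scheme).
import numpy as np

def unconditional_score(x):
    # Hypothetical stand-in for a trained score network s_theta(x);
    # here, the score of a standard normal, for illustration.
    return -x

def event_log_grad(x, event_fn, eps=1e-3):
    # Finite-difference gradient of log P(event | x), with the event
    # smoothed so its probability is differentiable and positive.
    grad = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (np.log(event_fn(xp)) - np.log(event_fn(xm))) / (2 * eps)
    return grad

def conditional_score(x, event_fn):
    return unconditional_score(x) + event_log_grad(x, event_fn)

# User-defined event "first coordinate exceeds 1", smoothed by a sigmoid.
event = lambda x: 1.0 / (1.0 + np.exp(-(x[0] - 1.0) / 0.1))
print(conditional_score(np.array([0.5, 0.0]), event))
```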
arXiv Detail & Related papers (2023-06-13T03:42:03Z)
- A Simple Generative Model of Logical Reasoning and Statistical Learning [0.6853165736531939]
Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence.
We here propose a simple Bayesian model of logical reasoning and statistical learning.
We simply model how data causes symbolic knowledge in terms of its satisfiability in formal logic.
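One minimal reading of that sentence, in code (an assumed toy construction, not the paper's exact model): each datum is a truth assignment over atoms, and a formula receives the probability mass of the data that satisfy it.

```python
# Toy satisfiability-based probabilities (assumed construction).
data = [  # observed truth assignments over the atoms p and q
    {"p": True,  "q": True},
    {"p": True,  "q": False},
    {"p": True,  "q": True},
    {"p": False, "q": True},
]

def prob(formula):
    """P(formula) = fraction of the data whose assignment satisfies it."""
    return sum(formula(d) for d in data) / len(data)

def cond(formula, given):
    """P(formula | given), a consequence-flavoured conditional."""
    satisfying = [d for d in data if given(d)]
    return sum(formula(d) for d in satisfying) / len(satisfying)

print(prob(lambda d: d["p"]))                      # P(p) = 0.75
print(cond(lambda d: d["q"], lambda d: d["p"]))    # P(q | p) = 2/3
```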
arXiv Detail & Related papers (2023-05-18T16:34:51Z)
- ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model class, namely "Denoising Diffusion Probabilistic Models" (DDPMs), for chirographic data.
Our model, named "ChiroDiff", is non-autoregressive and learns to capture holistic concepts, so it remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates whether the model's prediction on the counterfactual is consistent with the expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- On the Paradox of Learning to Reason from Data [86.13662838603761]
We show that BERT can attain near-perfect accuracy on in-distribution test examples while failing to generalize to other data distributions over the exact same problem space.
Our study provides an explanation for this paradox: instead of learning to emulate the correct reasoning function, BERT has in fact learned statistical features that inherently exist in logical reasoning problems.
arXiv Detail & Related papers (2022-05-23T17:56:48Z)
- Towards Unifying Logical Entailment and Statistical Estimation [0.6853165736531939]
This paper gives a generative model of the interpretation of formal logic for data-driven logical reasoning.
It is shown that the generative model is a unified theory of several different types of reasoning in logic and statistics.
arXiv Detail & Related papers (2022-02-27T17:51:35Z)
- Masked prediction tasks: a parameter identifiability view [49.533046139235466]
We focus on the widely used self-supervised learning method of predicting masked tokens.
We show that there is a rich landscape of possibilities, out of which some prediction tasks yield identifiability, while others do not.
arXiv Detail & Related papers (2022-02-18T17:09:32Z)
- Logical Credal Networks [87.25387518070411]
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
arXiv Detail & Related papers (2021-09-25T00:00:47Z)
- Typing assumptions improve identification in causal discovery [123.06886784834471]
Causal discovery from observational data is a challenging task for which an exact solution cannot always be identified.
We propose a new set of assumptions that constrain possible causal relationships based on the nature of the variables.
arXiv Detail & Related papers (2021-07-22T14:23:08Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
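Schematically, the correction can be as simple as interpolating a prediction towards the label prior, with the interpolation weight growing in regions flagged as overconfident; the flagging score below is a placeholder, as the paper's procedure for locating such regions is more involved.

```python
# Entropy raising by interpolation towards the label prior (schematic).
import numpy as np

def raise_entropy(pred, label_prior, weight):
    # weight in [0, 1]: 0 keeps the prediction, 1 returns the prior.
    return (1.0 - weight) * pred + weight * label_prior

pred = np.array([0.97, 0.02, 0.01])   # overconfident class probabilities
prior = np.array([0.50, 0.30, 0.20])  # empirical label distribution
weight = 0.8                          # placeholder "out-of-domain" score
print(raise_entropy(pred, prior, weight))
```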
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting [98.57851612518758]
Probabilistic time series forecasting involves estimating the distribution of the future based on its history.
We propose a deep state space model for probabilistic time series forecasting whereby the non-linear emission model and transition model are parameterized by networks.
We show in experiments that our model produces accurate and sharp probabilistic forecasts.
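The generative recipe such a model implies can be sketched as follows; the random untrained weights are a stand-in for the learned emission and transition networks, and the paper's inference procedure is not shown. Probabilistic forecasts come from sampling many latent trajectories.

```python
# Deep state space sketch with untrained stand-in networks (illustrative).
import numpy as np

rng = np.random.default_rng(0)
W_f = rng.normal(size=(4, 4))      # stand-in transition weights
W_g = rng.normal(size=(1, 4))      # stand-in emission weights

def transition(z):                 # non-linear transition model
    return np.tanh(W_f @ z)

def emission(z):                   # non-linear emission model
    return (W_g @ z)[0]

samples = []
for _ in range(100):               # Monte Carlo forecast samples
    z = np.zeros(4)                # latent state at the forecast origin
    traj = []
    for _ in range(5):             # five-step forecast horizon
        z = transition(z) + 0.1 * rng.normal(size=4)
        traj.append(emission(z) + 0.1 * rng.normal())
    samples.append(traj)

forecast = np.array(samples)       # shape: (100 samples, 5 steps)
print("mean:", forecast.mean(axis=0))
print("std: ", forecast.std(axis=0))
```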
arXiv Detail & Related papers (2021-01-31T06:49:33Z)
- Bayes Meets Entailment and Prediction: Commonsense Reasoning with Non-monotonicity, Paraconsistency and Predictive Accuracy [2.7412662946127755]
We introduce a generative model of logical consequence relations.
It formalises the process of how the truth value of a sentence is probabilistically generated from the probability distribution over states of the world.
We show that the generative model gives a new classification algorithm that outperforms several representative algorithms in predictive accuracy and complexity on the Kaggle Titanic dataset.
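A minimal sketch of the generative view (worlds and sentences invented here; the paper's handling of non-monotonicity and paraconsistency is not captured): a sentence's truth value is generated from a distribution over states of the world, and classification picks the most probable conclusion.

```python
# Hypothetical distribution over states of the world.
worlds = {
    ("rain", "wet"): 0.30,
    ("rain", "dry"): 0.05,
    ("clear", "wet"): 0.10,
    ("clear", "dry"): 0.55,
}

def p_true(sentence):
    """P(sentence) = total mass of the worlds that satisfy it."""
    return sum(p for w, p in worlds.items() if sentence(w))

def p_cond(sentence, given):
    """Consequence-flavoured conditional P(sentence | given)."""
    return p_true(lambda w: sentence(w) and given(w)) / p_true(given)

rain = lambda w: w[0] == "rain"
wet = lambda w: w[1] == "wet"
dry = lambda w: w[1] == "dry"

# Prediction as choosing the most probable conclusion given the evidence.
best = max([wet, dry], key=lambda s: p_cond(s, rain))
print(best is wet, p_cond(wet, rain))   # True 0.857...
```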
arXiv Detail & Related papers (2020-12-15T18:22:27Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.