On the redundancy of short and heterogeneous sequences of belief revisions
- URL: http://arxiv.org/abs/2504.13986v1
- Date: Fri, 18 Apr 2025 10:12:04 GMT
- Title: On the redundancy of short and heterogeneous sequences of belief revisions
- Authors: Paolo Liberatore
- Abstract summary: Forgetting a specific belief revision episode may not erase information because the other revisions may provide the same information or allow it to be deduced. Whether it does was proved coNP-hard for sequences of two arbitrary lexicographic revisions and for arbitrarily long sequences of lexicographic Horn revisions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Forgetting a specific belief revision episode may not erase information because the other revisions may provide the same information or allow it to be deduced. Whether it does was proved coNP-hard for sequences of two arbitrary lexicographic revisions and for arbitrarily long sequences of lexicographic Horn revisions. A polynomial algorithm is presented for the case of two Horn revisions. Heterogeneous sequences of revisions were proved to belong to Delta^p_2 (the second level of the polynomial hierarchy); their previously proved coNP-hardness is complemented by a proof of NP-hardness.
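As a concrete illustration of the notions involved, here is a minimal brute-force sketch of lexicographic revision and of the redundancy test, assuming propositional formulas are given as Python predicates over truth assignments. The function names are illustrative, and this is not the polynomial algorithm from the paper (which exploits the Horn case); it is only the naive, definition-level check.

```python
from itertools import product

def models(n):
    """All truth assignments over n propositional variables."""
    return list(product([False, True], repeat=n))

def lex_key(seq, m):
    """Plausibility key of model m: the most recent revision is
    compared first, so the oldest-first sequence is reversed."""
    return tuple(f(m) for f in reversed(seq))

def same_order(seq_a, seq_b, n):
    """Do two revision sequences induce the same order on models?"""
    for a in models(n):
        for b in models(n):
            if ((lex_key(seq_a, a) >= lex_key(seq_a, b))
                    != (lex_key(seq_b, a) >= lex_key(seq_b, b))):
                return False
    return True

def redundant(seq, i, n):
    """Can the i-th revision be forgotten without changing the order?"""
    return same_order(seq, seq[:i] + seq[i + 1:], n)

phi = lambda m: m[0]             # x0
psi = lambda m: m[0] and m[1]    # x0 and x1
print(redundant([phi, phi], 0, 2))  # True: a repeated revision is redundant
print(redundant([phi, psi], 0, 2))  # False: phi still refines the order
```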
Related papers
- Can we forget how we learned? Doxastic redundancy in iterated belief revision [0.0]
How information was acquired may become irrelevant.
Sometimes, a revision becomes redundant even when no other revision is equal to it, or even implies it.
Shortening sequences of lexicographic revisions amounts to shortening the most compact representations of iterated belief revision states.
arXiv Detail & Related papers (2024-02-23T17:09:04Z)
- Representing states in iterated belief revision [0.0]
Iterated belief revision requires information about the current beliefs.
Most of the literature concentrates on how to revise a doxastic state and neglects that it may grow exponentially.
This problem is studied for the most common ways of storing a doxastic state.
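A minimal sketch of the size gap at issue, under the assumption that the doxastic state is induced by a history of lexicographic revisions: the history itself is compact, while the explicit total preorder it induces has one entry per truth assignment. Names are illustrative, not from the paper.

```python
from itertools import product

def explicit_preorder(history, n):
    """Expand a lexicographic revision history into explicit plausibility
    ranks over all 2**n truth assignments (rank 0 = most plausible)."""
    key = lambda m: tuple(f(m) for f in reversed(history))
    ms = sorted(product([False, True], repeat=n), key=key, reverse=True)
    ranks, rank, prev = {}, -1, None
    for m in ms:
        if key(m) != prev:
            rank, prev = rank + 1, key(m)
        ranks[m] = rank
    return ranks  # 2**n entries, versus len(history) stored formulas

# Two formulas suffice to describe this state, but its explicit
# representation has 2**10 = 1024 entries.
history = [lambda m: m[0], lambda m: m[1]]
print(len(explicit_preorder(history, 10)))  # 1024
```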
arXiv Detail & Related papers (2023-05-16T06:16:23Z)
- DynGFN: Towards Bayesian Inference of Gene Regulatory Networks with GFlowNets [81.75973217676986]
Gene regulatory networks (GRN) describe interactions between genes and their products that control gene expression and cellular function.
Existing methods focus either on the first challenge, identifying cyclic structure from dynamics, or on the second, learning complex Bayesian posteriors over DAGs, but not both.
In this paper we leverage the fact that it is possible to estimate the "velocity" of gene expression with RNA velocity techniques to develop an approach that addresses both challenges.
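A drastically simplified sketch of why velocities help, assuming linear dynamics dx/dt = Wx rather than anything from DynGFN itself (which learns Bayesian posteriors over structures with GFlowNets): with paired expression/velocity observations, the interaction matrix can be recovered by least squares.

```python
import numpy as np

# Toy linear stand-in: expression snapshots X and velocities V with
# V ~= X @ W_true.T let us estimate the gene-interaction matrix W.
rng = np.random.default_rng(0)
genes, samples = 5, 200
W_true = rng.normal(size=(genes, genes)) * (rng.random((genes, genes)) < 0.3)
X = rng.normal(size=(samples, genes))                          # expression
V = X @ W_true.T + 0.01 * rng.normal(size=(samples, genes))    # velocities

W_hat, *_ = np.linalg.lstsq(X, V, rcond=None)  # solves X @ B ~= V
print(np.abs(W_hat.T - W_true).max())          # small: W is recoverable
```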
arXiv Detail & Related papers (2023-02-08T16:36:40Z)
- Belief Revision in Sentential Decision Diagrams [126.94029917018733]
We develop a general revision algorithm for SDDs based on a syntactic characterisation of Dalal revision.
Preliminary experiments performed with randomly generated knowledge bases show the advantages of directly performing revision within the SDD formalism.
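For reference, Dalal revision selects the models of the new formula at minimum Hamming distance from the models of the current base. Here is a brute-force sketch of that semantics over explicit model sets; the paper's contribution is performing it syntactically on SDDs, and the names below are illustrative.

```python
from itertools import product

def dalal_revise(k, phi, n):
    """Keep the models of phi at minimum Hamming distance from
    some model of k (the semantics of Dalal revision)."""
    ms = list(product([0, 1], repeat=n))
    k_models = [m for m in ms if k(m)]
    phi_models = [m for m in ms if phi(m)]
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    d = {m: min(dist(m, km) for km in k_models) for m in phi_models}
    return [m for m in phi_models if d[m] == min(d.values())]

# Believe x0 and x1; revise by not x0: the closest model is (0, 1).
print(dalal_revise(lambda m: m[0] and m[1], lambda m: not m[0], 2))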
arXiv Detail & Related papers (2022-01-20T11:01:41Z)
- Nested Counterfactual Identification from Arbitrary Surrogate Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
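A standard instance of the kind of unnesting the theorem generalises, for readers unfamiliar with the notation: the nested counterfactual Y_{x, Z_{x'}} (Y under X = x, with Z held at the value it would take under X = x') reduces to unnested joint counterfactuals by case analysis on the value of Z_{x'}.

```latex
\[
  P(Y_{x, Z_{x'}} = y) \;=\; \sum_{z} P\!\left(Y_{x, z} = y,\; Z_{x'} = z\right)
\]
```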
arXiv Detail & Related papers (2021-07-07T12:51:04Z)
- multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning [73.09791959325204]
We focus on a type of linguistic formal reasoning where the goal is to reason over explicit knowledge in the form of natural language facts and rules.
A recent work, named PRover, performs such reasoning by answering a question and also generating a proof graph that explains the answer.
In our work, we address a new and challenging problem of generating multiple proof graphs for reasoning over natural language rule-bases.
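PRover itself reasons over natural language with a transformer, but the proof-graph structure it outputs can be illustrated with a purely symbolic stand-in: forward chaining over Horn rules while recording which premises derived each fact. A hypothetical sketch:

```python
def prove(facts, rules):
    """rules: list of (premises, conclusion). Returns the derived facts
    and proof-graph edges (premise -> conclusion, labelled by rule index)."""
    known, edges, changed = set(facts), [], True
    while changed:
        changed = False
        for i, (premises, conclusion) in enumerate(rules):
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                edges += [(p, conclusion, i) for p in premises]
                changed = True
    return known, edges

facts = {"bird(tweety)"}
rules = [({"bird(tweety)"}, "has_wings(tweety)"),
         ({"has_wings(tweety)"}, "can_fly(tweety)")]
print(prove(facts, rules)[1])  # the proof graph explaining can_fly(tweety)
```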
arXiv Detail & Related papers (2021-06-02T17:58:35Z)
- On Mixed Iterated Revisions [0.2538209532048866]
A sequence of belief changes may involve several kinds of operations: for example, the first step is a revision, the second a contraction, and the third a refinement of the previous beliefs.
The ten operators considered in this article are shown to be all reducible to three: lexicographic revision, refinement and severe withdrawal.
Most of them require only a number of calls to a satisfiability checker; some are even easier.
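The "calls to a satisfiability checker" pattern is the classical reduction of entailment to (un)satisfiability. A minimal sketch, with a brute-force stand-in for the SAT solver; this only illustrates the building block, not the paper's reductions.

```python
from itertools import product

def satisfiable(f, n):
    """Brute-force stand-in for a call to a SAT solver."""
    return any(f(m) for m in product([False, True], repeat=n))

def entails(k, phi, n):
    """K entails phi iff K and not-phi is unsatisfiable: one
    satisfiability call per entailment test."""
    return not satisfiable(lambda m: k(m) and not phi(m), n)

# Does (x0 and x1) entail x0?
print(entails(lambda m: m[0] and m[1], lambda m: m[0], 2))  # True
```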
arXiv Detail & Related papers (2021-04-08T07:34:56Z)
- Permutation-Invariant Subgraph Discovery [16.380476734531513]
We introduce Permutation and Structured Perturbation Inference (PSPI).
PSPI is a new problem formulation that abstracts many graph matching tasks that arise in systems biology.
We propose an ADMM algorithm (STEPD) to solve a relaxed version of the PSPI problem.
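The abstract gives no detail on STEPD's updates, so the following is only the generic ADMM pattern (shown on a lasso problem) that such algorithms instantiate: alternate a smooth subproblem, a proximal step, and a dual update. Everything here is illustrative, not the paper's method.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Generic ADMM on min ||Ax - b||^2 / 2 + lam * ||x||_1."""
    m, n = A.shape
    x = z = u = np.zeros(n)
    AtA_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(iters):
        x = AtA_inv @ (Atb + rho * (z - u))   # smooth quadratic subproblem
        z = soft(x + u, lam / rho)            # proximal step for the L1 term
        u = u + x - z                         # dual (running residual) update
    return z

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))
x_true = np.zeros(10); x_true[:3] = [1.0, -2.0, 0.5]
print(np.round(admm_lasso(A, A @ x_true, lam=0.1), 2))  # recovers x_true
```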
arXiv Detail & Related papers (2021-04-02T14:28:21Z)
- GroupifyVAE: from Group-based Definition to VAE-based Unsupervised Representation Disentanglement [91.9003001845855]
VAE-based unsupervised disentanglement cannot be achieved without introducing other inductive biases.
We address VAE-based unsupervised disentanglement by leveraging the constraints derived from the group-theory-based definition as a non-probabilistic inductive bias.
We train 1800 models covering the most prominent VAE-based models on five datasets to verify the effectiveness of our method.
arXiv Detail & Related papers (2021-02-20T09:49:51Z)
- Revision by Conditionals: From Hook to Arrow [2.9005223064604078]
We introduce a 'plug and play' method for extending any iterated belief revision operator to the conditional case.
The flexibility of our approach is achieved by having the result of a conditional revision determined by that of a plain revision by its corresponding material conditional.
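The reduction in the last sentence is easy to sketch: revising by the conditional "if a then b" becomes plain revision by the material conditional (not a) or b. Below, plain_revise is one concrete and deliberately simple choice of plain revision operator; the paper's construction works for any iterated revision operator.

```python
from itertools import product

def plain_revise(k, phi, n):
    """Keep the models of k satisfying phi, or fall back to all models
    of phi if none do (one simple plain revision operator)."""
    ms = list(product([0, 1], repeat=n))
    both = [m for m in ms if k(m) and phi(m)]
    return both if both else [m for m in ms if phi(m)]

def conditional_revise(k, a, b, n):
    """Revision by the conditional a > b, reduced to plain revision
    by the material conditional (not a) or b."""
    return plain_revise(k, lambda m: (not a(m)) or b(m), n)

# Believe x0 and not x1; learn "if x0 then x1".
print(conditional_revise(lambda m: m[0] and not m[1],
                         lambda m: m[0], lambda m: m[1], 2))
```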
arXiv Detail & Related papers (2020-06-29T05:12:30Z)
- Consistency of a Recurrent Language Model With Respect to Incomplete Decoding [67.54760086239514]
We study the issue of a recurrent language model producing infinite-length sequences under incomplete decoding.
We propose two remedies which address inconsistency: consistent variants of top-k and nucleus sampling, and a self-terminating recurrent language model.
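A minimal sketch of the first remedy as described, assuming the fix is to keep the end-of-sequence token in the candidate set at every step (plain top-k may truncate it away forever, so decoding need never terminate). Variable names and the toy logits are illustrative.

```python
import numpy as np

def consistent_top_k(logits, k, eos_id, rng):
    """Top-k sampling that never excludes the EOS token, so the
    sampled sequence terminates with probability one."""
    top = np.argsort(logits)[-k:]        # usual top-k candidates
    cand = np.union1d(top, [eos_id])     # force EOS back into the set
    p = np.exp(logits[cand] - logits[cand].max())
    return rng.choice(cand, p=p / p.sum())

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.5, 1.0, -3.0])   # token 3 = EOS, low probability
print(consistent_top_k(logits, k=2, eos_id=3, rng=rng))
```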
arXiv Detail & Related papers (2020-02-06T19:56:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.