Can we forget how we learned? Doxastic redundancy in iterated belief
revision
- URL: http://arxiv.org/abs/2402.15445v1
- Date: Fri, 23 Feb 2024 17:09:04 GMT
- Title: Can we forget how we learned? Doxastic redundancy in iterated belief
revision
- Authors: Paolo Liberatore
- Abstract summary: How information was acquired may become irrelevant.
Sometimes, a revision becomes redundant even when no other revision is equal to it, or even implies it.
Shortening sequences of lexicographic revisions is shortening the most compact representations of iterated belief revision states.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How information was acquired may become irrelevant. An obvious case is when
something is confirmed many times. In terms of iterated belief revision, a
specific revision may become irrelevant in the presence of others. Simple
repetitions are an example, but not the only case in which this happens.
Sometimes, a revision becomes redundant even when no other revision is equal
to it, or even implies it. A necessary and sufficient condition for the
redundancy of the
first of a sequence of lexicographic revisions is given. The problem is
coNP-complete even with only two propositional revisions. The complexity is
the same in the Horn case, but only with an unbounded number of revisions: it
becomes polynomial with two revisions. Lexicographic revisions are not only
relevant by themselves, but also because sequences of them are the most compact
of the common mechanisms used to represent the state of an iterated revision
process. Shortening sequences of lexicographic revisions is shortening the most
compact representations of iterated belief revision states.
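For intuition, the following is a minimal brute-force sketch, in Python, of the redundancy notion the abstract describes. It assumes a void initial doxastic state and represents each revising formula as a predicate over truth assignments; all names are illustrative and this is not the paper's method, which gives a necessary and sufficient condition rather than an enumeration. The first revision of a sequence of lexicographic revisions is redundant exactly when dropping it leaves the induced plausibility order over propositional models unchanged; the sketch checks this over all assignments, so it is exponential in the number of variables.

```python
from itertools import product

def assignments(variables):
    """Enumerate all truth assignments over the given variables, as dicts."""
    for values in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, values))

def lex_key(model, revisions):
    """Plausibility key of a model under a sequence of lexicographic revisions.

    Later revisions take priority: a model satisfying a more recent revision
    is strictly more plausible than one that does not, with ties broken by
    earlier revisions.
    """
    return tuple(rev(model) for rev in reversed(revisions))

def same_order(revisions_a, revisions_b, variables):
    """True iff the two sequences order every pair of models in the same way."""
    models = list(assignments(variables))
    for i in models:
        for j in models:
            a = lex_key(i, revisions_a) >= lex_key(j, revisions_a)
            b = lex_key(i, revisions_b) >= lex_key(j, revisions_b)
            if a != b:
                return False
    return True

def first_revision_redundant(revisions, variables):
    """Is the first (oldest) revision of the sequence redundant?"""
    return same_order(revisions, revisions[1:], variables)

# Revising by x and then by x again: the first revision is redundant.
# Revising by x and then by y: it is not.
variables = ["x", "y"]
rev_x = lambda m: m["x"]
rev_y = lambda m: m["y"]
print(first_revision_redundant([rev_x, rev_x], variables))  # True
print(first_revision_redundant([rev_x, rev_y], variables))  # False
```

The two examples mirror the abstract: a simple repetition makes the earlier revision redundant, while a revision followed by an unrelated one remains informative.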
Related papers
- On the redundancy of short and heterogeneous sequences of belief revisions [0.0]
Forgetting a specific belief revision episode may not erase information because the other revisions may provide the same information or allow it to be deduced.
Whether it does is proved coNP-hard for sequences of two arbitrary lexicographic revisions, or for arbitrarily long sequences of lexicographic revisions.
arXiv Detail & Related papers (2025-04-18T10:12:04Z) - Causal Layering via Conditional Entropy [85.01590667411956]
Causal discovery aims to recover information about an unobserved causal graph from the observable data it generates.
We provide ways to recover layerings of a graph by accessing the data via a conditional entropy oracle.
arXiv Detail & Related papers (2024-01-19T05:18:28Z) - SCREWS: A Modular Framework for Reasoning with Revisions [58.698199183147935]
We present SCREWS, a modular framework for reasoning with revisions.
We show that SCREWS unifies several previous approaches under a common framework.
We evaluate our framework with state-of-the-art LLMs on a diverse set of reasoning tasks.
arXiv Detail & Related papers (2023-09-20T15:59:54Z) - Extending Path-Dependent NJ-ODEs to Noisy Observations and a Dependent
Observation Framework [6.404122934568861]
We introduce a new loss function, which allows us to deal with noisy observations and explain why the previously used loss function did not lead to a consistent estimator.
arXiv Detail & Related papers (2023-07-24T22:01:22Z) - Representing states in iterated belief revision [0.0]
Iterated belief revision requires information about the current beliefs.
Most of the literature concentrates on how to revise a doxastic state and neglects that it may grow exponentially.
This problem is studied for the most common ways of storing a doxastic state.
arXiv Detail & Related papers (2023-05-16T06:16:23Z) - On the Complexity of Representation Learning in Contextual Linear
Bandits [110.84649234726442]
We show that representation learning is fundamentally more complex than linear bandits.
In particular, learning with a given set of representations is never simpler than learning with the worst realizable representation in the set.
arXiv Detail & Related papers (2022-12-19T13:08:58Z) - What's the Harm? Sharp Bounds on the Fraction Negatively Affected by
Treatment [58.442274475425144]
We develop a robust inference algorithm that is efficient almost regardless of how and how fast these functions are learned.
We demonstrate our method in simulation studies and in a case study of career counseling for the unemployed.
arXiv Detail & Related papers (2022-05-20T17:36:33Z) - Learning to Revise References for Faithful Summarization [10.795263196202159]
We propose a new approach to improve reference quality while retaining all data.
We construct synthetic unsupported alternatives to supported sentences and use contrastive learning to discourage/encourage (un)faithful revisions.
We extract a small corpus from a noisy source--the Electronic Health Record (EHR)--for the task of summarizing a hospital admission from multiple notes.
arXiv Detail & Related papers (2022-04-13T18:54:19Z) - Nested Counterfactual Identification from Arbitrary Surrogate
Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
arXiv Detail & Related papers (2021-07-07T12:51:04Z) - On Mixed Iterated Revisions [0.2538209532048866]
A sequence of belief changes may involve several kinds of change operators: for example, the first step may be a revision, the second a contraction and the third a refinement of the previous beliefs.
The ten operators considered in this article are shown to be all reducible to three: lexicographic revision, refinement and severe withdrawal.
Most of them require only a number of calls to a satisfiability checker, some are even easier.
arXiv Detail & Related papers (2021-04-08T07:34:56Z) - A Theoretical Analysis of the Repetition Problem in Text Generation [55.8184629429347]
We show that the repetition problem is, unfortunately, caused by the traits of our language itself.
One major reason is attributed to the fact that there exist too many words predicting the same word as the subsequent word with high probability.
We propose a novel rebalanced encoding approach to alleviate the high inflow problem.
arXiv Detail & Related papers (2020-12-29T08:51:47Z) - A Weaker Faithfulness Assumption based on Triple Interactions [89.59955143854556]
We propose a weaker assumption that we call $2$-adjacency faithfulness.
We propose a sound orientation rule for causal discovery that applies under weaker assumptions.
arXiv Detail & Related papers (2020-10-27T13:04:08Z) - When Hearst Is not Enough: Improving Hypernymy Detection from Corpus
with Distributional Models [59.46552488974247]
This paper addresses whether an is-a relationship exists between words (x, y) with the help of large textual corpora.
Recent studies suggest that pattern-based methods are superior when large-scale Hearst pairs are extracted and fed in, relieving the sparsity of unseen (x, y) pairs.
For the first time, this paper quantifies the non-negligible existence of those specific cases. We also demonstrate that distributional methods are ideal to make up for pattern-based ones in such cases.
arXiv Detail & Related papers (2020-10-10T08:34:19Z) - Revision by Conditionals: From Hook to Arrow [2.9005223064604078]
We introduce a 'plug and play' method for extending any iterated belief revision operator to the conditional case.
The flexibility of our approach is achieved by having the result of a conditional revision determined by that of a plain revision by its corresponding material conditional.
arXiv Detail & Related papers (2020-06-29T05:12:30Z) - Optimal Change-Point Detection with Training Sequences in the Large and
Moderate Deviations Regimes [72.68201611113673]
This paper investigates a novel offline change-point detection problem from an information-theoretic perspective.
We assume that the underlying pre- and post-change distributions are not known and can only be learned from the available training sequences.
arXiv Detail & Related papers (2020-03-13T23:39:40Z) - Consistency of a Recurrent Language Model With Respect to Incomplete
Decoding [67.54760086239514]
We study the issue of receiving infinite-length sequences from a recurrent language model.
We propose two remedies which address inconsistency: consistent variants of top-k and nucleus sampling, and a self-terminating recurrent language model.
arXiv Detail & Related papers (2020-02-06T19:56:15Z) - Fact-aware Sentence Split and Rephrase with Permutation Invariant
Training [93.66323661321113]
Sentence Split and Rephrase aims to break down a complex sentence into several simple sentences with its meaning preserved.
Previous studies tend to address the issue by seq2seq learning from parallel sentence pairs.
We introduce Permutation Training to verify the effects of order variance in seq2seq learning for this task.
arXiv Detail & Related papers (2020-01-16T07:30:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.