Can we forget how we learned? Doxastic redundancy in iterated belief
revision
- URL: http://arxiv.org/abs/2402.15445v1
- Date: Fri, 23 Feb 2024 17:09:04 GMT
- Title: Can we forget how we learned? Doxastic redundancy in iterated belief
revision
- Authors: Paolo Liberatore
- Abstract summary: How information was acquired may become irrelevant.
Sometimes, a revision becomes redundant even when no other revision is equal to it, or even when none implies it.
Shortening sequences of lexicographic revisions is shortening the most compact representations of iterated belief revision states.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How information was acquired may become irrelevant. An obvious case
is when something is confirmed many times. In terms of iterated belief
revision, a specific revision may become irrelevant in the presence of others.
Simple repetitions are an example, but not the only case in which this happens.
Sometimes, a revision becomes redundant even when no other revision is equal to
it, or even when none implies it. A necessary and sufficient condition is given
for the redundancy of the first revision in a sequence of lexicographic
revisions. The problem is coNP-complete even with only two propositional
revisions. Complexity is the same in the Horn case, but only with an unbounded
number of revisions: it becomes polynomial with two revisions. Lexicographic
revisions are relevant not only in themselves, but also because sequences of
them are the most compact of the common mechanisms used to represent the state
of an iterated revision process. Shortening sequences of lexicographic
revisions therefore shortens the most compact representations of iterated
belief revision states.
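To make the notion concrete, here is a minimal brute-force sketch, not the paper's characterization: it assumes formulas are given as Python predicates over propositional models, takes the doxastic state induced by a sequence of lexicographic revisions to be the total preorder that ranks models by their satisfaction vector (most recent revision first), and calls the first revision redundant when dropping it leaves that preorder unchanged. The helper names (`profile`, `first_is_redundant`) and the exhaustive enumeration over models are illustrative assumptions, not the paper's condition or algorithm.

```python
# Brute-force illustration of doxastic redundancy under lexicographic revision.
# Assumptions (not from the paper): formulas are Python predicates over models
# represented as dicts of booleans; the doxastic state is the total preorder
# that ranks a model by the tuple of revisions it satisfies, most recent first;
# the first revision is "redundant" if dropping it leaves that preorder intact.

from itertools import product


def models(variables):
    """Enumerate all truth assignments over the given variables."""
    for bits in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, bits))


def profile(model, sequence):
    """Satisfaction vector of a model, most recent revision first.
    Comparing these tuples lexicographically gives the induced preorder."""
    return tuple(int(f(model)) for f in reversed(sequence))


def same_preorder(seq_a, seq_b, variables):
    """Check whether two revision sequences induce the same preorder on models."""
    ms = list(models(variables))
    for i in ms:
        for j in ms:
            if (profile(i, seq_a) >= profile(j, seq_a)) != \
               (profile(i, seq_b) >= profile(j, seq_b)):
                return False
    return True


def first_is_redundant(sequence, variables):
    """Brute-force test: does dropping the first revision change the preorder?"""
    return same_preorder(sequence, sequence[1:], variables)


if __name__ == "__main__":
    variables = ["a", "b"]
    rev_a = lambda m: m["a"]
    rev_b = lambda m: m["b"]
    # A simple repetition: the earlier revision by a adds nothing.
    print(first_is_redundant([rev_a, rev_a], variables))  # True
    # Here the first revision still matters: dropping it changes the order.
    print(first_is_redundant([rev_a, rev_b], variables))  # False
```

This sketch is exponential in the number of variables, since it enumerates every model; the paper instead studies the decision problem directly, showing it coNP-complete already for two propositional revisions.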
Related papers
- SCREWS: A Modular Framework for Reasoning with Revisions [58.698199183147935]
We present SCREWS, a modular framework for reasoning with revisions.
We show that SCREWS unifies several previous approaches under a common framework.
We evaluate our framework with state-of-the-art LLMs on a diverse set of reasoning tasks.
arXiv Detail & Related papers (2023-09-20T15:59:54Z) - Representing states in iterated belief revision [0.0]
Iterated belief revision requires information about the current beliefs.
Most literature concentrates on how to revise a doxastic state and neglects that it may grow exponentially.
This problem is studied for the most common ways of storing a doxastic state.
arXiv Detail & Related papers (2023-05-16T06:16:23Z) - On the Complexity of Representation Learning in Contextual Linear
Bandits [110.84649234726442]
We show that representation learning is fundamentally more complex than linear bandits.
In particular, learning with a given set of representations is never simpler than learning with the worst realizable representation in the set.
arXiv Detail & Related papers (2022-12-19T13:08:58Z) - Learning to Revise References for Faithful Summarization [10.795263196202159]
We propose a new approach to improve reference quality while retaining all data.
We construct synthetic unsupported alternatives to supported sentences and use contrastive learning to encourage faithful revisions and discourage unfaithful ones.
We extract a small corpus from a noisy source, the Electronic Health Record (EHR), for the task of summarizing a hospital admission from multiple notes.
arXiv Detail & Related papers (2022-04-13T18:54:19Z) - On Mixed Iterated Revisions [0.2538209532048866]
A sequence of belief changes may involve several kinds of operations: for example, the first step is a revision, the second a contraction and the third a refinement of the previous beliefs.
The ten operators considered in this article are shown to be all reducible to three: lexicographic revision, refinement and severe withdrawal.
Most of them require only a number of calls to a satisfiability checker; some are even easier.
arXiv Detail & Related papers (2021-04-08T07:34:56Z) - A Theoretical Analysis of the Repetition Problem in Text Generation [55.8184629429347]
We show that the repetition problem is, unfortunately, caused by the traits of our language itself.
One major reason is that too many words predict the same subsequent word with high probability.
We propose a novel rebalanced encoding approach to alleviate the high inflow problem.
arXiv Detail & Related papers (2020-12-29T08:51:47Z) - When Hearst Is not Enough: Improving Hypernymy Detection from Corpus
with Distributional Models [59.46552488974247]
This paper addresses whether an is-a relationship exists between words (x, y) with the help of large textual corpora.
Recent studies suggest that pattern-based methods are superior when large-scale Hearst pairs are extracted and used and the sparsity of unseen (x, y) pairs is relieved.
For the first time, this paper quantifies the non-negligible existence of those specific cases. We also demonstrate that distributional methods are ideal to make up for pattern-based ones in such cases.
arXiv Detail & Related papers (2020-10-10T08:34:19Z) - Revision by Conditionals: From Hook to Arrow [2.9005223064604078]
We introduce a 'plug and play' method for extending any iterated belief revision operator to the conditional case.
The flexibility of our approach is achieved by having the result of a conditional revision determined by that of a plain revision by its corresponding material conditional.
arXiv Detail & Related papers (2020-06-29T05:12:30Z) - Consistency of a Recurrent Language Model With Respect to Incomplete
Decoding [67.54760086239514]
We study the issue of a recurrent language model producing infinite-length sequences under incomplete decoding.
We propose two remedies which address inconsistency: consistent variants of top-k and nucleus sampling, and a self-terminating recurrent language model.
arXiv Detail & Related papers (2020-02-06T19:56:15Z) - Fact-aware Sentence Split and Rephrase with Permutation Invariant
Training [93.66323661321113]
Sentence Split and Rephrase aims to break down a complex sentence into several simple sentences with its meaning preserved.
Previous studies tend to address the issue by seq2seq learning from parallel sentence pairs.
We introduce Permutation Training to verify the effects of order variance in seq2seq learning for this task.
arXiv Detail & Related papers (2020-01-16T07:30:19Z)