On Mixed Iterated Revisions
- URL: http://arxiv.org/abs/2104.03571v1
- Date: Thu, 8 Apr 2021 07:34:56 GMT
- Title: On Mixed Iterated Revisions
- Authors: Paolo Liberatore
- Abstract summary: A sequence of changes may involve several of them: for example, the first step is a revision, the second a contraction and the third a refinement of the previous beliefs.
The ten operators considered in this article are shown to be all reducible to three: lexicographic revision, refinement and severe withdrawal.
Most of them require only a polynomial number of calls to a satisfiability checker; some are even easier.
- Score: 0.2538209532048866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several forms of iterable belief change exist, differing in the kind of
change and its strength: some operators introduce formulae, others remove them;
some add formulae unconditionally, others only as additions to the previous
beliefs; some only relative to the current situation, others in all possible
cases. A sequence of changes may involve several of them: for example, the
first step is a revision, the second a contraction and the third a refinement
of the previous beliefs. The ten operators considered in this article are shown
to be all reducible to three: lexicographic revision, refinement and severe
withdrawal. In turn, these three can be expressed in terms of lexicographic
revision at the cost of restructuring the sequence. This restructuring need not
be done explicitly: an algorithm that works on the original sequence is
shown. The complexity of mixed sequences of belief change operators is also
analyzed. Most of them require only a polynomial number of calls to a
satisfiability checker, some are even easier.
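To make the complexity claim concrete, the sketch below computes the beliefs resulting from a sequence of lexicographic revisions applied to an initially uninformed state, using one satisfiability call per formula, plus one more call for an entailment query. The formulas-as-Python-functions encoding and the brute-force `is_satisfiable` check are simplifying assumptions made for illustration; they are not the paper's algorithm or data structures.

```python
from itertools import product

# Toy encoding: a propositional formula is a Python function mapping a
# truth assignment (dict of variable name -> bool) to True/False.  A real
# implementation would hand CNF formulas to a SAT solver instead.

def is_satisfiable(formulas, variables):
    """Brute-force check that all formulas are jointly satisfiable."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(f(assignment) for f in formulas):
            return True
    return False

def current_beliefs(revisions, variables):
    """Belief base after a sequence of lexicographic revisions.

    `revisions` is listed in chronological order (most recent last) and is
    applied to an initially uninformed state.  Later revisions take
    priority, so formulas are added from the most recent backwards, each
    kept only if it remains consistent with the ones already kept --
    one satisfiability call per formula.
    """
    kept = []
    for formula in reversed(revisions):
        if is_satisfiable(kept + [formula], variables):
            kept.append(formula)
    return kept

def entails(beliefs, formula, variables):
    """Beliefs entail `formula` iff beliefs plus its negation are unsatisfiable."""
    return not is_satisfiable(beliefs + [lambda m: not formula(m)], variables)

# Example: revise by a, then by b, then by (not a).  The latest revision
# overrides the first, so the resulting beliefs entail (not a) and b.
a, b, not_a = (lambda m: m["a"]), (lambda m: m["b"]), (lambda m: not m["a"])
beliefs = current_beliefs([a, b, not_a], ["a", "b"])
print(entails(beliefs, not_a, ["a", "b"]), entails(beliefs, b, ["a", "b"]))  # True True
```

According to the abstract, a mixed sequence of operators would first be rewritten in terms of lexicographic revision (possibly restructuring the sequence) before a computation along these lines applies.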
Related papers
- Generating Counterfactual Explanations Using Cardinality Constraints [0.0]
We propose to explicitly add a cardinality constraint to counterfactual generation, limiting how many features can differ from the original example.
This provides more interpretable and easily understandable counterfactuals (a rough sketch of this constraint appears below).
arXiv Detail & Related papers (2024-04-11T06:33:19Z)
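As a rough illustration of the cardinality-constraint idea above, one can restrict counterfactual candidates to those that change at most k features of the original example. The brute-force search, the feature encoding, and the rule-based classifier below are assumptions made for this example, not the paper's method.

```python
from itertools import combinations, product

def counterfactuals_with_cardinality(x, candidate_values, predict, target, k):
    """Enumerate counterfactuals that change at most k features of x.

    x                : original example, a dict of feature -> value
    candidate_values : dict of feature -> alternative values to try
    predict          : classifier mapping an example dict to a label
    target           : desired label for the counterfactual
    k                : cardinality bound on the number of changed features
    """
    results = []
    for size in range(1, k + 1):
        for subset in combinations(x, size):
            for values in product(*(candidate_values[f] for f in subset)):
                candidate = dict(x)
                candidate.update(zip(subset, values))
                if predict(candidate) == target:
                    results.append(candidate)
    return results

# Toy usage with a hypothetical rule-based classifier.
predict = lambda e: "approve" if e["income"] > 50 and e["debt"] < 20 else "reject"
x = {"income": 40, "debt": 30, "age": 45}
alternatives = {"income": [60, 80], "debt": [10], "age": [30, 60]}
print(counterfactuals_with_cardinality(x, alternatives, predict, "approve", k=2))
```

A real system would presumably replace the brute-force enumeration with an optimizer that takes the cardinality bound as an explicit constraint; the point here is only how the bound keeps the counterfactuals close to the original example.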
- Can we forget how we learned? Doxastic redundancy in iterated belief revision [0.0]
How information was acquired may become irrelevant.
Sometimes, a revision becomes redundant even when no other revision is equal to it, or even when none implies it.
Shortening sequences of lexicographic revisions is shortening the most compact representations of iterated belief revision states.
arXiv Detail & Related papers (2024-02-23T17:09:04Z)
- SCREWS: A Modular Framework for Reasoning with Revisions [58.698199183147935]
We present SCREWS, a modular framework for reasoning with revisions.
We show that SCREWS unifies several previous approaches under a common framework.
We evaluate our framework with state-of-the-art LLMs on a diverse set of reasoning tasks.
arXiv Detail & Related papers (2023-09-20T15:59:54Z)
- A Simplified Expression for Quantum Fidelity [0.0]
This work shows, in a novel and elegant proof, that the expression can be rewritten into a simpler form.
The simpler expression gives rise to a formulation that is subsequently shown to be more computationally efficient than the best previous methods.
arXiv Detail & Related papers (2023-09-19T12:19:12Z)
- Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality [84.94877848357896]
Recent datasets expose the lack of systematic generalization ability in standard sequence-to-sequence models.
We analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias and the tendency to memorize whole examples.
We show substantial empirical improvements using standard sequence-to-sequence models on two widely-used compositionality datasets.
arXiv Detail & Related papers (2022-11-28T17:36:41Z)
- On Limited Non-Prioritised Belief Revision Operators with Dynamic Scope [2.7071541526963805]
We introduce the concept of dynamic-limited revision: revisions expressible by a total preorder over a limited set of worlds.
For a belief change operator, we consider the scope, which consists of those beliefs which yield success of revision.
We show that for each set satisfying single sentence closure and disjunction completeness there exists a dynamic-limited revision having the union of this set with the belief set as scope.
arXiv Detail & Related papers (2021-08-17T17:22:29Z)
- Controllable Text Simplification with Explicit Paraphrasing [88.02804405275785]
Text Simplification improves the readability of sentences through several rewriting transformations, such as lexical paraphrasing, deletion, and splitting.
Current simplification systems are predominantly sequence-to-sequence models that are trained end-to-end to perform all these operations simultaneously.
We propose a novel hybrid approach that leverages linguistically-motivated rules for splitting and deletion, and couples them with a neural paraphrasing model to produce varied rewriting styles.
arXiv Detail & Related papers (2020-10-21T13:44:40Z)
- Neural Syntactic Preordering for Controlled Paraphrase Generation [57.5316011554622]
Our work uses syntactic transformations to softly "reorder" the source sentence and guide our neural paraphrasing model.
First, given an input sentence, we derive a set of feasible syntactic rearrangements using an encoder-decoder model.
Next, we use each proposed rearrangement to produce a sequence of position embeddings, which encourages our final encoder-decoder paraphrase model to attend to the source words in a particular order.
arXiv Detail & Related papers (2020-05-05T09:02:25Z)
- ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations [97.27005783856285]
This paper introduces ASSET, a new dataset for assessing sentence simplification in English.
We show that simplifications in ASSET are better at capturing characteristics of simplicity when compared to other standard evaluation datasets for the task.
arXiv Detail & Related papers (2020-05-01T16:44:54Z)
- Fact-aware Sentence Split and Rephrase with Permutation Invariant Training [93.66323661321113]
Sentence Split and Rephrase aims to break down a complex sentence into several simple sentences with its meaning preserved.
Previous studies tend to address the issue by seq2seq learning from parallel sentence pairs.
We introduce Permutation Training to verify the effects of order variance in seq2seq learning for this task.
arXiv Detail & Related papers (2020-01-16T07:30:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.