Revision by Conditionals: From Hook to Arrow
- URL: http://arxiv.org/abs/2006.15811v1
- Date: Mon, 29 Jun 2020 05:12:30 GMT
- Title: Revision by Conditionals: From Hook to Arrow
- Authors: Jake Chandler, Richard Booth
- Abstract summary: We introduce a 'plug and play' method for extending any iterated belief revision operator to the conditional case.
The flexibility of our approach is achieved by having the result of a conditional revision determined by that of a plain revision by its corresponding material conditional.
- Score: 2.9005223064604078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The belief revision literature has largely focussed on the issue of how to
revise one's beliefs in the light of information regarding matters of fact.
Here we turn to an important but comparatively neglected issue: How might one
extend a revision operator to handle conditionals as input? Our approach to
this question of 'conditional revision' is distinctive insofar as it abstracts
from the controversial details of how to revise by factual sentences. We
introduce a 'plug and play' method for uniquely extending any iterated belief
revision operator to the conditional case. The flexibility of our approach is
achieved by having the result of a conditional revision by a Ramsey Test
conditional ('arrow') determined by that of a plain revision by its
corresponding material conditional ('hook'). It is shown to satisfy a number of
new constraints that are of independent interest.
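As a rough gloss (our notation, not the paper's): the Ramsey Test reads an arrow off plain revision, and the paper's method fixes conditional revision via revision by the hook. One way to write these two ingredients:

```latex
% Ramsey Test (standard formulation): the arrow A > B is accepted in an
% epistemic state \Psi iff B is believed after plain revision by A.
A > B \in \mathrm{Bel}(\Psi)
  \quad\Longleftrightarrow\quad
  B \in \mathrm{Bel}(\Psi \ast A)
% One natural reading of the hook-to-arrow determination (an assumption on
% our part; the paper's exact principle may be stronger or weaker): the
% beliefs after revising by the arrow match those after revising by the hook.
\mathrm{Bel}(\Psi \ast (A > B)) = \mathrm{Bel}(\Psi \ast (A \supset B))
```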
Related papers
- FENICE: Factuality Evaluation of summarization based on Natural language Inference and Claim Extraction [85.26780391682894]
We propose Factuality Evaluation of summarization based on Natural language Inference and Claim Extraction (FENICE).
FENICE leverages an NLI-based alignment between information in the source document and a set of atomic facts, referred to as claims, extracted from the summary.
Our metric sets a new state of the art on AGGREFACT, the de facto benchmark for factuality evaluation.
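A minimal sketch of the kind of NLI-based claim-to-source alignment the summary describes, assuming a hypothetical `nli_entail_prob` scorer and a mean-of-maxima aggregation (both our assumptions, not FENICE's actual design):

```python
def claim_alignment_score(claims, source_sentences, nli_entail_prob):
    """Align each extracted claim to the source sentence that most strongly
    entails it, then aggregate the alignment scores into one factuality
    score. `nli_entail_prob(premise, hypothesis)` is a hypothetical NLI
    model call returning an entailment probability."""
    per_claim = [
        max(nli_entail_prob(premise=s, hypothesis=c) for s in source_sentences)
        for c in claims
    ]
    return sum(per_claim) / len(per_claim)
```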
arXiv Detail & Related papers (2024-03-04T17:57:18Z)
- Longitudinal Counterfactuals: Constraints and Opportunities [59.11233767208572]
We propose using longitudinal data to assess and improve plausibility in counterfactuals.
We develop a metric that compares longitudinal differences to counterfactual differences, allowing us to evaluate how similar a counterfactual is to prior observed changes.
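A toy version of the idea, under our own assumptions about the details: score a counterfactual's plausibility by how close its feature delta is to deltas actually observed between longitudinal visits.

```python
import numpy as np

def longitudinal_plausibility(x, x_cf, longitudinal_pairs):
    """Compare the counterfactual difference (x_cf - x) to differences
    observed over time in longitudinal data: the smaller the distance to
    the nearest observed change, the more plausible the counterfactual.
    The nearest-neighbour formulation is our simplification."""
    delta_cf = np.asarray(x_cf) - np.asarray(x)
    observed = [np.asarray(b) - np.asarray(a) for a, b in longitudinal_pairs]
    return min(np.linalg.norm(delta_cf - d) for d in observed)
```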
arXiv Detail & Related papers (2024-02-29T20:17:08Z)
- Can we forget how we learned? Doxastic redundancy in iterated belief revision [0.0]
How information was acquired may become irrelevant.
Sometimes a revision becomes redundant even when no other revision in the sequence equals it, or even implies it.
Shortening sequences of lexicographic revisions is shortening the most compact representations of iterated belief revision states.
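To make the setting concrete, here is a small sketch (total orders over worlds instead of total preorders, a simplification on our part) of lexicographic revision and of a sequence whose first revision is redundant:

```python
from itertools import product

def lex_revise(order, formula):
    """Lexicographic revision on a total order of worlds (most plausible
    first): move all worlds satisfying `formula` ahead of all worlds
    falsifying it, keeping the previous relative order within each group."""
    sat = [w for w in order if formula(w)]
    unsat = [w for w in order if not formula(w)]
    return sat + unsat

def induced_order(revisions, worlds):
    """Apply a sequence of lexicographic revisions, oldest first."""
    order = list(worlds)
    for f in revisions:
        order = lex_revise(order, f)
    return order

# Toy check over two atoms: starting from this enumeration, revising by
# `a` and then by `a and b` induces the same order as revising by
# `a and b` alone, so the first revision is redundant *for this state*.
atoms = ["a", "b"]
worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=2)]
longer = [lambda w: w["a"], lambda w: w["a"] and w["b"]]
shorter = [lambda w: w["a"] and w["b"]]
print(induced_order(longer, worlds) == induced_order(shorter, worlds))  # True
```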
arXiv Detail & Related papers (2024-02-23T17:09:04Z)
- SCREWS: A Modular Framework for Reasoning with Revisions [58.698199183147935]
We present SCREWS, a modular framework for reasoning with revisions.
We show that SCREWS unifies several previous approaches under a common framework.
We evaluate our framework with state-of-the-art LLMs on a diverse set of reasoning tasks.
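The summary gives no module details; a generic sketch of a sampling / conditional-resampling / selection loop of the kind the framework describes (interfaces and control flow are our assumptions) might look like:

```python
def modular_reasoning(question, sample, resample, select, rounds=2):
    """Generic revise-with-modules loop: draw an initial answer, repeatedly
    propose a revision conditioned on the current answer, and let a
    selection module keep the better of the two. All three modules are
    abstract callables supplied by the user."""
    answer = sample(question)
    for _ in range(rounds):
        revised = resample(question, answer)
        answer = select(question, answer, revised)
    return answer
```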
arXiv Detail & Related papers (2023-09-20T15:59:54Z)
- To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support [20.905660642919052]
We explore the main challenges to identifying argumentative claims in need of specific revisions.
We propose a new sampling strategy based on revision distance.
We provide evidence that using contextual information and domain knowledge can further improve prediction results.
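The summary names a revision-distance-based sampling strategy without details; a hypothetical version using a simple string-similarity proxy for revision distance (`difflib` ratio, our choice) could look like:

```python
import difflib

def revision_distance(a, b):
    """Hypothetical proxy for revision distance: one minus the string
    similarity ratio between two versions of a claim."""
    return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

def sample_by_revision_distance(claim_histories, threshold=0.3):
    """Sketch of a revision-distance-based sampling strategy: claims that
    later changed substantially become positive 'improvable' examples,
    claims that stayed (near-)identical become negatives."""
    positives, negatives = [], []
    for first, last in claim_histories:
        bucket = positives if revision_distance(first, last) >= threshold else negatives
        bucket.append(first)
    return positives, negatives
```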
arXiv Detail & Related papers (2023-05-26T10:19:54Z)
- Reasoning over Logically Interacted Conditions for Question Answering [113.9231035680578]
We study a more challenging task where answers are constrained by a list of conditions that logically interact.
We propose a new model, TReasoner, for this challenging reasoning task.
TReasoner achieves state-of-the-art performance on two benchmark conditional QA datasets.
arXiv Detail & Related papers (2022-05-25T16:41:39Z)
- Learning to Revise References for Faithful Summarization [10.795263196202159]
We propose a new approach to improve reference quality while retaining all data.
We construct synthetic unsupported alternatives to supported sentences and use contrastive learning to encourage faithful revisions and discourage unfaithful ones.
We extract a small corpus from a noisy source--the Electronic Health Record (EHR)--for the task of summarizing a hospital admission from multiple notes.
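A minimal sketch of a contrastive objective of this shape (a margin formulation; the paper's actual loss and scoring model are not specified here):

```python
def contrastive_revision_loss(scores_supported, scores_unsupported, margin=1.0):
    """Margin-based contrastive loss (our formulation, not necessarily the
    paper's): penalise the scorer whenever a synthetic unsupported revision
    is not at least `margin` below its supported counterpart."""
    return sum(
        max(0.0, margin - (s_pos - s_neg))
        for s_pos, s_neg in zip(scores_supported, scores_unsupported)
    ) / len(scores_supported)

# Toy usage with made-up scores from some hypothetical scoring model.
print(contrastive_revision_loss([2.0, 0.5], [1.0, 1.5]))  # 1.0
```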
arXiv Detail & Related papers (2022-04-13T18:54:19Z)
- On Limited Non-Prioritised Belief Revision Operators with Dynamic Scope [2.7071541526963805]
We introduce the concept of dynamic-limited revision: revisions expressible by a total preorder over a limited set of worlds.
For a belief change operator, we consider its scope: the set of inputs for which revision succeeds.
We show that for each set satisfying single-sentence closure and disjunction completeness, there exists a dynamic-limited revision whose scope is the union of this set with the belief set.
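A small sketch of the core construction as we read it: revision selects the minimal input-satisfying worlds with respect to a total preorder (given here as a rank function), but only worlds inside the limited set are available, so inputs outside the scope fail. A simplification for illustration, not the paper's exact definition.

```python
def dynamic_limited_revise(rank, limited_worlds, formula):
    """Pick the most plausible worlds (minimal `rank`) among the limited
    set that satisfy the input; return None when no such world exists,
    i.e. the input lies outside the operator's scope."""
    candidates = [w for w in limited_worlds if formula(w)]
    if not candidates:
        return None  # revision by this input does not succeed
    best = min(rank(w) for w in candidates)
    return [w for w in candidates if rank(w) == best]
```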
arXiv Detail & Related papers (2021-08-17T17:22:29Z)
- Deep Just-In-Time Inconsistency Detection Between Comments and Source Code [51.00904399653609]
In this paper, we aim to detect whether a comment becomes inconsistent as a result of changes to the corresponding body of code.
We develop a deep-learning approach that learns to correlate a comment with code changes.
We show the usefulness of our approach by combining it with a comment update model to build a more comprehensive automatic comment maintenance system.
arXiv Detail & Related papers (2020-10-04T16:49:28Z)
- Descriptor Revision for Conditionals: Literal Descriptors and Conditional Preservation [2.580765958706854]
Descriptor revision is a framework for addressing the problem of belief change.
In this article, we investigate the realisation of descriptor revision for a conditional logic.
We show how descriptor revision for conditionals can be characterised by a constraint satisfaction problem.
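As a rough sketch of descriptor revision in general (following Hansson's select-among-candidates picture; the conditional-logic and constraint-satisfaction specifics of this paper are not reproduced):

```python
def descriptor_revise(candidate_belief_sets, descriptor, preference_key):
    """Descriptor revision, schematically: among a fixed repertoire of
    candidate belief sets, return the most preferred one that satisfies
    the success descriptor (e.g. 'p is believed'). Returns None if the
    descriptor is unsatisfiable within the repertoire."""
    admissible = [K for K in candidate_belief_sets if descriptor(K)]
    return min(admissible, key=preference_key) if admissible else None
```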
arXiv Detail & Related papers (2020-06-02T08:21:33Z)
- Fact-aware Sentence Split and Rephrase with Permutation Invariant Training [93.66323661321113]
Sentence Split and Rephrase aims to break down a complex sentence into several simple sentences with its meaning preserved.
Previous studies tend to address the issue by seq2seq learning from parallel sentence pairs.
We introduce Permutation Invariant Training to alleviate the effects of order variance in seq2seq learning for this task.
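A compact sketch of the permutation-invariant idea (the actual training objective may differ): take the minimum loss over all orderings of the reference simple sentences, so a correct split emitted in a different order is not punished.

```python
from itertools import permutations

def pit_loss(pair_loss, predictions, references):
    """Permutation-invariant loss: evaluate the predicted simple sentences
    against every ordering of the references and keep the best (minimum)
    total. `pair_loss` is any per-sentence loss supplied by the caller."""
    return min(
        sum(pair_loss(p, r) for p, r in zip(predictions, perm))
        for perm in permutations(references)
    )

# Toy usage with a 0/1 mismatch loss.
preds = ["the cat sat .", "it purred ."]
refs = ["it purred .", "the cat sat ."]
print(pit_loss(lambda p, r: 0 if p == r else 1, preds, refs))  # 0
```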
arXiv Detail & Related papers (2020-01-16T07:30:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.