On Limited Non-Prioritised Belief Revision Operators with Dynamic Scope
- URL: http://arxiv.org/abs/2108.07769v1
- Date: Tue, 17 Aug 2021 17:22:29 GMT
- Title: On Limited Non-Prioritised Belief Revision Operators with Dynamic Scope
- Authors: Kai Sauerwald and Gabriele Kern-Isberner and Christoph Beierle
- Abstract summary: We introduce the concept of dynamic-limited revision: revisions expressible by a total preorder over a limited set of worlds.
For a belief change operator, we consider its scope, which consists of those beliefs for which revision succeeds.
We show that for each set satisfying single sentence closure and disjunction completeness, there exists a dynamic-limited revision whose scope is the union of this set with the belief set.
- Score: 2.7071541526963805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The research on non-prioritized revision studies revision operators which do
not accept all new beliefs. In this paper, we contribute to this line of
research by introducing the concept of dynamic-limited revision: revisions
expressible by a total preorder over a limited set of worlds. For a belief
change operator, we consider the scope, which consists of those beliefs for
which revision succeeds. We show that for each set satisfying single sentence
closure and disjunction completeness, there exists a dynamic-limited revision
whose scope is the union of this set with the belief set. We investigate
iteration postulates for belief and scope dynamics and characterise them for
dynamic-limited revision. As an application, we employ dynamic-limited
revision to study belief revision in the context of so-called inherent
beliefs, which are beliefs globally accepted by the agent. This leads to
revision operators which we call inherence-limited. We present a representation
theorem for inherence-limited revision, and we compare these operators and
dynamic-limited revision with the closely related credible-limited revision
operators.
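
To make the abstract's central notion concrete, the following is a minimal propositional sketch of a revision driven by a total preorder over a limited set of worlds, together with a scope check. The world representation, the specific success condition (the input must have a model among the limited worlds), and all names are illustrative assumptions and are not taken from the paper's formal definitions.

```python
from itertools import product

# Worlds are truth assignments over a fixed set of atoms; formulas are Python
# predicates over worlds.  This toy semantic setting stands in for the paper's
# formal machinery.
ATOMS = ("p", "q")
ALL_WORLDS = [dict(zip(ATOMS, bits))
              for bits in product([False, True], repeat=len(ATOMS))]

def models(phi, worlds):
    """Worlds among `worlds` that satisfy the predicate `phi`."""
    return [w for w in worlds if phi(w)]

def dynamic_limited_revise(belief_models, phi, limited_worlds, rank):
    """One step of a toy dynamic-limited revision.

    `limited_worlds` is the limited set of worlds available to the operator,
    and `rank` encodes a total preorder over them (lower = more plausible).
    Assumed success condition: revision by `phi` succeeds iff `phi` has a
    model among the limited worlds; otherwise the prior beliefs are kept.
    """
    candidates = models(phi, limited_worlds)
    if not candidates:                 # phi lies outside the operator's reach
        return belief_models           # unsuccessful revision: beliefs unchanged
    best = min(rank(w) for w in candidates)
    return [w for w in candidates if rank(w) == best]

def in_scope(phi, belief_models, limited_worlds, rank):
    """`phi` is in the scope iff revising by `phi` makes `phi` accepted."""
    result = dynamic_limited_revise(belief_models, phi, limited_worlds, rank)
    return all(phi(w) for w in result)

# Example: an operator that only ranks the worlds where p holds.
limited = [w for w in ALL_WORLDS if w["p"]]
rank = lambda w: 0 if w["q"] else 1
K = models(lambda w: w["p"] and w["q"], ALL_WORLDS)

print(in_scope(lambda w: not w["q"], K, limited, rank))  # True: some limited world satisfies it
print(in_scope(lambda w: not w["p"], K, limited, rank))  # False: no limited world satisfies it
```

With the limited set restricted to p-worlds, revision by a formula inconsistent with p cannot succeed, so that formula falls outside the scope; this mirrors the idea that the scope collects exactly the inputs for which revision succeeds.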
Related papers
- Credibility-Limited Revision for Epistemic Spaces [0.0]
We extend the class of credibility-limited revision operators in such a way that all AGM revision operators are included.
A semantic characterization of extended credibility-limited revision operators that employ total preorders on possible worlds is presented.
arXiv Detail & Related papers (2024-09-11T09:15:43Z)
- Deep Backtracking Counterfactuals for Causally Compliant Explanations [57.94160431716524]
We introduce a practical method called deep backtracking counterfactuals (DeepBC) for computing backtracking counterfactuals in structural causal models.
As a special case, our formulation reduces to methods in the field of counterfactual explanations.
arXiv Detail & Related papers (2023-10-11T17:11:10Z) - SCREWS: A Modular Framework for Reasoning with Revisions [58.698199183147935]
We present SCREWS, a modular framework for reasoning with revisions.
We show that SCREWS unifies several previous approaches under a common framework.
We evaluate our framework with state-of-the-art LLMs on a diverse set of reasoning tasks.
arXiv Detail & Related papers (2023-09-20T15:59:54Z)
- System of Spheres-based Two Level Credibility-limited Revisions [0.0]
When revising by a two level credibility-limited revision, two levels of credibility and one level of incredibility are considered.
We propose a construction for two level credibility-limited revision operators based on Grove's systems of spheres.
arXiv Detail & Related papers (2023-07-11T07:10:39Z)
- Hallucinated Adversarial Control for Conservative Offline Policy Evaluation [64.94009515033984]
We study the problem of conservative off-policy evaluation (COPE) where given an offline dataset of environment interactions, we seek to obtain a (tight) lower bound on a policy's performance.
We introduce HAMBO, which builds on an uncertainty-aware learned model of the transition dynamics.
We prove that the resulting COPE estimates are valid lower bounds, and, under regularity conditions, show their convergence to the true expected return.
arXiv Detail & Related papers (2023-03-02T08:57:35Z)
- Conservative-Progressive Collaborative Learning for Semi-supervised Semantic Segmentation [50.51992191965432]
We propose a novel learning approach, called Conservative-Progressive Collaborative Learning (CPCL), in which two predictive networks are trained in parallel.
One network seeks common ground via intersection supervision and is supervised by the high-quality labels to ensure more reliable supervision.
The other network reserves differences via union supervision and is supervised by all the pseudo labels to keep exploring with curiosity.
arXiv Detail & Related papers (2022-11-30T02:47:25Z)
- GroupifyVAE: from Group-based Definition to VAE-based Unsupervised Representation Disentanglement [91.9003001845855]
VAE-based unsupervised disentanglement cannot be achieved without introducing additional inductive biases.
We address VAE-based unsupervised disentanglement by leveraging the constraints derived from the Group Theory based definition as the non-probabilistic inductive bias.
We train 1800 models covering the most prominent VAE-based models on five datasets to verify the effectiveness of our method.
arXiv Detail & Related papers (2021-02-20T09:49:51Z)
- On the use of evidence theory in belief base revision [0.0]
We propose the idea of credible belief base revision, which yields two new formula-based revision operators.
These operators stem from consistent subbases maximal with respect to credibility instead of set inclusion and cardinality.
arXiv Detail & Related papers (2020-09-24T12:45:32Z)
- Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments [96.114824979298]
Among the important challenges in conference peer review are reviewers maliciously attempting to get assigned to certain papers and "torpedo reviewing".
We present a framework that brings all these challenges under a common umbrella, together with a (randomized) algorithm for reviewer assignment.
Our algorithms can limit the chance that any malicious reviewer gets assigned to their desired paper to 50% while producing assignments with over 90% of the total optimal similarity.
arXiv Detail & Related papers (2020-06-29T23:55:53Z)
- Revision by Conditionals: From Hook to Arrow [2.9005223064604078]
We introduce a 'plug and play' method for extending any iterated belief revision operator to the conditional case.
The flexibility of our approach is achieved by having the result of a conditional revision determined by that of a plain revision by its corresponding material conditional (see the sketch after this list).
arXiv Detail & Related papers (2020-06-29T05:12:30Z)
- Belief Base Revision for Further Improvement of Unified Answer Set Programming [0.0]
The base revision operator is developed using the Removed Set Revision strategy.
The operator is characterized with respect to the postulates that a base revision operator satisfies.
arXiv Detail & Related papers (2020-02-27T08:31:01Z)
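
The "Revision by Conditionals: From Hook to Arrow" entry above describes reading a conditional revision off a plain revision by the corresponding material conditional. Below is a minimal sketch of that reading; the world representation, the trivial plain-revision operator, and all function names are illustrative assumptions rather than the authors' construction.

```python
from itertools import product

def material_conditional(antecedent, consequent):
    """The material conditional A -> B as a predicate over worlds."""
    return lambda w: (not antecedent(w)) or consequent(w)

def conditional_revise(plain_revise, beliefs, antecedent, consequent):
    """Revise by the conditional 'if A then B' by delegating to a plain
    revision by the material conditional A -> B.  Any plain (iterated)
    revision operator with signature (beliefs, formula) can be plugged in."""
    return plain_revise(beliefs, material_conditional(antecedent, consequent))

# Toy usage: worlds over two atoms and a trivial plain revision that simply
# returns the models of the input formula.
WORLDS = [dict(zip(("p", "q"), bits)) for bits in product([False, True], repeat=2)]
plain = lambda beliefs, phi: [w for w in WORLDS if phi(w)]

result = conditional_revise(plain, [], lambda w: w["p"], lambda w: w["q"])
print(result)  # the models of p -> q
```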