Parallel Belief Revision via Order Aggregation
- URL: http://arxiv.org/abs/2505.13914v1
- Date: Tue, 20 May 2025 04:26:01 GMT
- Title: Parallel Belief Revision via Order Aggregation
- Authors: Jake Chandler, Richard Booth
- Abstract summary: This paper offers a method for extending serial iterated belief revision operators to handle parallel change. Based on a family of order aggregators known as TeamQueue aggregators, it provides a principled way to recover the independently plausible properties found in the literature, without yielding the more dubious ones.
- Score: 1.474723404975345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite efforts to better understand the constraints that operate on single-step parallel (aka "package", "multiple") revision, very little work has been carried out on how to extend the model to the iterated case. A recent paper by Delgrande & Jin outlines a range of relevant rationality postulates. While many of these are plausible, they lack an underlying unifying explanation. We draw on recent work on iterated parallel contraction to offer a general method for extending serial iterated belief revision operators to handle parallel change. This method, based on a family of order aggregators known as TeamQueue aggregators, provides a principled way to recover the independently plausible properties that can be found in the literature, without yielding the more dubious ones.
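The technical engine here is order aggregation: roughly, the plausibility orders that serial revision would produce for each input are merged into a single order governing the parallel step. The sketch below shows one simple queue-style way of merging total preorders; the level-list encoding and the synchronous popping rule are assumptions made for illustration, not the paper's definition of the TeamQueue family.

```python
from typing import Hashable, Iterable, List, Set

World = Hashable
Preorder = List[Set[World]]  # plausibility levels, most plausible first


def team_queue_aggregate(preorders: Iterable[Preorder]) -> Preorder:
    """Merge several total preorders into one, queue-style.

    Each input is treated as a queue of plausibility levels. At every step we
    pop the front level of every queue and place all not-yet-ranked worlds in
    the next level of the aggregate. This is only an illustrative member of a
    queue-based aggregation family, not the paper's official definition.
    """
    queues = [list(p) for p in preorders]
    placed: Set[World] = set()
    aggregate: Preorder = []
    while any(queues):
        popped: Set[World] = set()
        for q in queues:
            if q:
                popped |= q.pop(0)
        new_level = popped - placed
        if new_level:
            aggregate.append(new_level)
            placed |= new_level
    return aggregate


# Two serial revision outcomes over {w1, w2, w3} disagree on w2 vs w3; the
# aggregate keeps w1 on top and pools the disputed worlds one level down.
order_a = [{"w1"}, {"w2"}, {"w3"}]
order_b = [{"w1"}, {"w3"}, {"w2"}]
print(team_queue_aggregate([order_a, order_b]))  # e.g. [{'w1'}, {'w2', 'w3'}]
```

Different members of such a family would presumably differ in how the front levels are popped and pooled; the synchronous one-level-per-queue rule above is only the simplest choice.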
Related papers
- FoldA: Computing Partial-Order Alignments Using Directed Net Unfoldings [0.6906005491572401]
This paper proposes FoldA, a new technique for computing partial-order alignments on the fly using directed Petri net unfoldings. We evaluate our technique on 485 synthetic model-log pairs and compare it against A*- and Dijkstra-alignments on 13 real-life model-log pairs and 6 benchmark pairs.
arXiv Detail & Related papers (2025-06-10T09:44:05Z) - Parallel Belief Contraction via Order Aggregation [1.474723404975345]
We consider how to extend serial contraction operations that obey stronger properties to the parallel setting. We also consider the iterated case: the behaviour of beliefs after a sequence of parallel contractions. We propose a general method for extending serial iterated belief change operators to handle parallel change.
arXiv Detail & Related papers (2025-01-23T00:42:16Z) - Consecutive Batch Model Editing with HooK Layers [59.673084839708224]
CoachHooK is a model editing method that simultaneously supports sequential and batch editing.
It is memory-friendly, as it only needs a small amount of memory to store several hook layers whose size remains unchanged over time.
arXiv Detail & Related papers (2024-03-08T14:07:44Z) - Vanishing Feature: Diagnosing Model Merging and Beyond [1.1510009152620668]
We identify the "vanishing feature" phenomenon, where input-induced features diminish during propagation through a merged model. We show that existing normalization strategies can be enhanced by precisely targeting the vanishing feature issue. We propose the "Preserve-First Merging" (PFM) strategy, which focuses on preserving early-layer features.
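A minimal sketch of a preserve-first merge is given below; the function name, the choice to copy early layers from one model, and the 0.5 averaging coefficient are all assumptions for illustration, not the paper's exact PFM procedure.

```python
import numpy as np


def preserve_first_merge(params_a, params_b, n_preserve):
    """Merge two models layer by layer.

    The first `n_preserve` layers are copied from model A unchanged, so the
    early features that would otherwise vanish are kept intact; the remaining
    layers are averaged. A toy illustration of a preserve-first strategy, not
    the paper's exact algorithm.
    """
    merged = []
    for i, (wa, wb) in enumerate(zip(params_a, params_b)):
        merged.append(wa.copy() if i < n_preserve else 0.5 * (wa + wb))
    return merged


# Two toy 3-layer models: keep layer 0 from model A, average layers 1 and 2.
model_a = [np.ones((2, 2)), np.full((2, 2), 2.0), np.full((2, 2), 4.0)]
model_b = [np.zeros((2, 2)), np.full((2, 2), 4.0), np.full((2, 2), 0.0)]
print([w.mean() for w in preserve_first_merge(model_a, model_b, n_preserve=1)])
# [1.0, 3.0, 2.0]
```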
arXiv Detail & Related papers (2024-02-05T17:06:26Z) - PEneo: Unifying Line Extraction, Line Grouping, and Entity Linking for End-to-end Document Pair Extraction [28.205723817300576]
Document pair extraction aims to identify key and value entities as well as their relationships from visually-rich documents.
Most existing methods divide it into two separate tasks: semantic entity recognition (SER) and relation extraction (RE).
This paper introduces a novel framework, PEneo, which performs document pair extraction in a unified pipeline.
arXiv Detail & Related papers (2024-01-07T12:48:07Z) - HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision [118.0818807474809]
This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision.
Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document.
arXiv Detail & Related papers (2023-05-23T16:53:49Z) - A Critique of Strictly Batch Imitation Learning [26.121994149869767]
We argue that notational issues obscure how the pseudo-state visitation distribution might be disconnected from the policy's true state visitation distribution.
We construct examples where the parameter coupling advocated by Jarrett et al. leads to inconsistent estimates of the expert's policy, unlike behavioral cloning.
arXiv Detail & Related papers (2021-10-05T14:07:30Z) - Joint Passage Ranking for Diverse Multi-Answer Retrieval [56.43443577137929]
We study multi-answer retrieval, an under-explored problem that requires retrieving passages to cover multiple distinct answers for a question.
This task requires joint modeling of retrieved passages, as models should not repeatedly retrieve passages containing the same answer at the cost of missing a different valid answer.
In this paper, we introduce JPR, a joint passage retrieval model focusing on reranking. To model the joint probability of the retrieved passages, JPR makes use of an autoregressive reranker that selects a sequence of passages, equipped with novel training and decoding algorithms.
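A toy sketch of the sequential selection idea behind such a reranker is given below: each pick is conditioned on the passages already chosen, so covering a new answer is rewarded over repeating one. The `passages`/`relevance` inputs and the additive scoring rule are assumptions for illustration; JPR's actual reranker is a trained autoregressive model.

```python
def select_passages(passages, relevance, k):
    """Greedily pick k passages, conditioning each pick on what was already chosen.

    `passages` maps a passage id to the set of answers it contains; `relevance`
    gives its standalone score. Each step scores passages by relevance plus a
    bonus for answers not yet covered, so the selector avoids retrieving the
    same answer repeatedly at the cost of missing a different valid one.
    """
    chosen, covered = [], set()
    candidates = set(passages)
    for _ in range(min(k, len(candidates))):
        best = max(
            candidates,
            key=lambda p: relevance[p] + len(passages[p] - covered),
        )
        chosen.append(best)
        covered |= passages[best]
        candidates.remove(best)
    return chosen


passages = {"p1": {"Paris"}, "p2": {"Paris"}, "p3": {"Lyon"}}
relevance = {"p1": 0.9, "p2": 0.8, "p3": 0.3}
print(select_passages(passages, relevance, k=2))  # ['p1', 'p3'] rather than ['p1', 'p2']
```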
arXiv Detail & Related papers (2021-04-17T04:48:36Z) - The Extraordinary Failure of Complement Coercion Crowdsourcing [50.599433903377374]
Crowdsourcing has eased and scaled up the collection of linguistic annotation in recent years.
We aim to collect annotated data for this phenomenon by reducing it to either of two known tasks: Explicit Completion and Natural Language Inference.
In both cases, crowdsourcing resulted in low agreement scores, even though we followed the same methodologies as in previous work.
arXiv Detail & Related papers (2020-10-12T19:04:04Z) - Incomplete Utterance Rewriting as Semantic Segmentation [57.13577518412252]
We present a novel and extensive approach, which formulates incomplete utterance rewriting as a semantic segmentation task.
Instead of generating from scratch, such a formulation introduces edit operations and shapes the problem as prediction of a word-level edit matrix.
Our approach is four times faster than the standard approach in inference.
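A small illustration of the edit-based formulation: rather than generating the rewrite token by token, the model only predicts edit operations that copy spans from the dialogue context into the incomplete utterance. The per-position edit encoding below is a simplified assumption, not the paper's exact word-level edit matrix.

```python
# Toy edit-based rewriting: the dialogue context supplies spans, and each
# utterance position is labelled either "keep" or "substitute" with a span.
context = ["Beijing", "is", "the", "capital"]
utterance = ["why", "is", "it", "famous"]

# One edit per utterance position: ("keep", None) or ("substitute", (start, end)).
edits = [("keep", None), ("keep", None), ("substitute", (0, 1)), ("keep", None)]


def apply_edits(context, utterance, edits):
    """Rewrite the utterance by copying context spans instead of generating text."""
    out = []
    for token, (op, span) in zip(utterance, edits):
        if op == "keep":
            out.append(token)
        elif op == "substitute":
            out.extend(context[span[0]:span[1]])
    return out


print(apply_edits(context, utterance, edits))  # ['why', 'is', 'Beijing', 'famous']
```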
arXiv Detail & Related papers (2020-09-28T09:29:49Z) - Iterative Edit-Based Unsupervised Sentence Simplification [30.128553647491817]
Our model is guided by a scoring function involving fluency, simplicity, and meaning preservation.
We iteratively perform word and phrase-level edits on the complex sentence.
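A hedged sketch of this iterative edit loop is given below; the single-word-deletion edits, the toy scorer, and the stopping rule are assumptions for illustration, whereas the paper's scoring function combines learned fluency, simplicity, and meaning-preservation terms.

```python
def simplify(sentence, score, candidate_edits, max_iters=10):
    """Hill-climb over sentence edits: keep applying the best-scoring edit.

    `candidate_edits(sentence)` yields candidate rewrites (e.g. word deletions,
    substitutions, reorderings); `score` trades off simplicity and meaning
    preservation. Stops when no edit improves the score.
    """
    current = sentence
    for _ in range(max_iters):
        candidates = list(candidate_edits(current))
        if not candidates:
            break
        best = max(candidates, key=score)
        if score(best) <= score(current):
            break
        current = best
    return current


# Toy components: edits are single-word deletions, and the scorer prefers
# shorter sentences that still contain the words essential to the meaning.
essential = {"cat", "sat", "mat"}


def deletions(sentence):
    words = sentence.split()
    for i in range(len(words)):
        yield " ".join(words[:i] + words[i + 1:])


def toy_score(sentence):
    words = set(sentence.split())
    meaning = 10 * len(essential & words)  # meaning preservation
    simplicity = -len(sentence.split())    # shorter is simpler
    return meaning + simplicity


print(simplify("the fluffy cat sat quietly on the mat", toy_score, deletions))
# cat sat mat
```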
arXiv Detail & Related papers (2020-06-17T03:53:12Z) - Lower bounds in multiple testing: A framework based on derandomized proxies [107.69746750639584]
This paper introduces an analysis strategy based on derandomization, illustrated by applications to various concrete models.
We provide numerical simulations of some of these lower bounds, and show a close relation to the actual performance of the Benjamini-Hochberg (BH) algorithm.
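For reference, the Benjamini-Hochberg step-up procedure that these lower bounds are compared against can be stated in a few lines (standard textbook formulation; the example p-values are invented).

```python
import numpy as np


def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure for FDR control.

    Sort the p-values, find the largest k with p_(k) <= alpha * k / m,
    and reject the k hypotheses with the smallest p-values.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest index meeting its threshold
        rejected[order[: k + 1]] = True
    return rejected


print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74], alpha=0.05))
# [ True  True False False False False]  (first two hypotheses rejected)
```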
arXiv Detail & Related papers (2020-05-07T19:59:51Z)