Representing states in iterated belief revision
- URL: http://arxiv.org/abs/2305.09200v2
- Date: Fri, 23 Feb 2024 16:45:05 GMT
- Title: Representing states in iterated belief revision
- Authors: Paolo Liberatore
- Abstract summary: Iterated belief revision requires information about the current beliefs.
Most literature concentrates on how to revise a doxastic state and neglects that it may grow exponentially.
This problem is studied for the most common ways of storing a doxastic state.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Iterated belief revision requires information about the current beliefs. This
information is represented by mathematical structures called doxastic states.
Most literature concentrates on how to revise a doxastic state and neglects
that it may grow exponentially. This problem is studied for the most common
ways of storing a doxastic state. All four methods are able to store every
doxastic state, but some do it in less space than others. In particular, the
explicit representation (an enumeration of the current beliefs) is the most
wasteful of space. The level representation (a sequence of propositional
formulae) and the natural representation (a history of natural revisions) are
more compact. The lexicographic representation (a history of lexicographic
revisions) is more compact still.
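To make the comparison concrete, here is a minimal Python sketch under simplifying assumptions: a doxastic state is stored explicitly as an ordered partition of propositional models (most plausible level first), and lexicographic and natural revisions are applied to it. The encoding and the names `lex_revise` and `nat_revise` are illustrative, not taken from the paper.

```python
from itertools import product

def all_models(variables):
    """Enumerate every truth assignment (exponential in the number of variables)."""
    for bits in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, bits))

def lex_revise(levels, formula):
    """Lexicographic revision: models satisfying `formula` become preferred to
    all others; the previous order is preserved within each group."""
    sat = [[m for m in lv if formula(m)] for lv in levels]
    unsat = [[m for m in lv if not formula(m)] for lv in levels]
    return [lv for lv in sat + unsat if lv]

def nat_revise(levels, formula):
    """Natural revision: only the most plausible `formula`-models move to the
    front; every other model keeps its previous level."""
    for i, lv in enumerate(levels):
        best = [m for m in lv if formula(m)]
        if best:
            new = [best]
            for j, other in enumerate(levels):
                keep = [m for m in other if m not in best] if j == i else list(other)
                if keep:
                    new.append(keep)
            return new
    return levels  # unsatisfiable formula: state unchanged

# The lexicographic representation stores only the history of revising
# formulae; the explicit representation stores the full ordered partition.
state = [list(all_models(["a", "b"]))]           # void state: all models tied
state = lex_revise(state, lambda m: m["a"])      # revise by a
state = lex_revise(state, lambda m: not m["b"])  # then by not-b
for rank, level in enumerate(state):
    print(rank, level)
```

The history of two revising formulae occupies space linear in their size, while the explicit partition enumerates all 2^n models: the exponential growth the abstract refers to.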
Related papers
- Simplifying the Theory on Over-Smoothing [0.27195102129095]
Graph convolutions cause over-smoothing, which refers to representations becoming more similar with increased depth.
This paper attempts to align these directions by showing that over-smoothing is merely a special case of power iteration.
arXiv Detail & Related papers (2024-07-16T16:00:42Z)
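As an illustration of the power-iteration claim above (my own sketch, not the paper's code): repeatedly applying a symmetrically normalized adjacency matrix is power iteration, so node features collapse toward its dominant eigenvector and pairwise similarity rises with depth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.maximum(A, A.T)                 # undirected random graph
np.fill_diagonal(A, 1.0)               # self-loops, as in GCNs
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))    # D^{-1/2} A D^{-1/2}

X = rng.standard_normal((n, 3))        # random node features
for depth in [1, 2, 4, 8, 16, 32]:
    H = np.linalg.matrix_power(A_hat, depth) @ X
    H = H / np.linalg.norm(H, axis=1, keepdims=True)
    # mean pairwise cosine similarity between node representations
    sim = (H @ H.T)[np.triu_indices(n, 1)].mean()
    print(depth, round(sim, 3))
```

The printed similarity approaches 1 as depth grows, which is the over-smoothing effect described in the summary.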
- Skews in the Phenomenon Space Hinder Generalization in Text-to-Image Generation [59.138470433237615]
We introduce statistical metrics that quantify both the linguistic and visual skew of a dataset for relational learning.
We show that systematically controlled metrics are strongly predictive of generalization performance.
This work points to enhancing data diversity or balance, rather than merely scaling up the absolute dataset size, as an important direction for improving quality.
arXiv Detail & Related papers (2024-03-25T03:18:39Z)
- On the structure of Completely Reducible States [0.0]
The complete reducibility property for bipartite states was used to prove several theorems inside and outside entanglement theory.
So far, only three types of bipartite states have been proved to possess this property.
arXiv Detail & Related papers (2024-03-08T16:55:57Z)
- Can we forget how we learned? Doxastic redundancy in iterated belief revision [0.0]
How information was acquired may become irrelevant.
Sometimes, a revision becomes redundant even when no other revision in the sequence equals it, or even implies it.
Shortening sequences of lexicographic revisions is shortening the most compact representations of iterated belief revision states.
arXiv Detail & Related papers (2024-02-23T17:09:04Z)
- Entity Disambiguation with Entity Definitions [50.01142092276296]
Local models have recently attained astounding performance in Entity Disambiguation (ED).
Previous works limited their studies to using, as the textual representation of each candidate, only its Wikipedia title.
In this paper, we address this limitation and investigate to what extent more expressive textual representations can mitigate it.
We report a new state of the art on 2 out of 6 benchmarks we consider and strongly improve the generalization capability over unseen patterns.
arXiv Detail & Related papers (2022-10-11T17:46:28Z)
- The quantum commuting model (Ia): The CHSH game and other examples: Uniqueness of optimal states [91.3755431537592]
We use the universal description of quantum commuting correlations as the state space on the universal algebra for two-player games.
We find that the CHSH game leaves a single optimal state on this common algebra.
arXiv Detail & Related papers (2022-10-07T17:38:31Z)
- Latent Topology Induction for Understanding Contextualized Representations [84.7918739062235]
We study the representation space of contextualized embeddings and gain insight into the hidden topology of large language models.
We show there exists a network of latent states that summarize linguistic properties of contextualized representations.
arXiv Detail & Related papers (2022-06-03T11:22:48Z)
- On the Generalization of Representations in Reinforcement Learning [32.303656009679045]
We provide an informative bound on the generalization error arising from a specific state representation.
Our bound applies to any state representation and quantifies the natural tension between representations that generalize well and those that approximate well.
arXiv Detail & Related papers (2022-03-01T15:22:09Z)
- Image Collation: Matching illustrations in manuscripts [76.21388548732284]
We introduce the task of illustration collation and a large annotated public dataset to evaluate solutions.
We analyze state-of-the-art similarity measures for this task and show that they succeed in simple cases but struggle for large manuscripts.
We show clear evidence that significant performance boosts can be expected by exploiting cycle-consistent correspondences.
arXiv Detail & Related papers (2021-08-18T12:12:14Z)
- Towards Better Laplacian Representation in Reinforcement Learning with Generalized Graph Drawing [88.22538267731733]
The Laplacian representation provides a succinct and informative representation of states.
Recent works propose to minimize a spectral graph drawing objective, which however has infinitely many global minimizers other than the eigenvectors.
We show that our learned Laplacian representations lead to more exploratory options and better reward shaping.
arXiv Detail & Related papers (2021-07-12T16:14:02Z)
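For context, a minimal sketch of the standard Laplacian representation the summary above refers to (the paper's generalized graph drawing objective is not reproduced here): each state is embedded using the d smallest eigenvectors of the state graph's Laplacian.

```python
import numpy as np

n = 5                                   # states on a chain 0-1-2-3-4
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0     # transitions between neighboring states
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian D - A
eigvals, eigvecs = np.linalg.eigh(L)    # eigenvalues in ascending order
d = 2
phi = eigvecs[:, :d]                    # row i: d-dimensional embedding of state i
print(np.round(phi, 3))
```

The second eigenvector orders the states along the chain, which is why such embeddings are useful for exploration and reward shaping.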
- Compact Belief State Representation for Task Planning [8.521089975868131]
We develop a novel belief state representation based on Cartesian product and union operations over belief substates.
These two operations and single-variable assignment nodes form an And-Or directed acyclic graph of Belief State (AOBS).
We show that the AOBS representation is not only much more compact than a full belief state but also scales better than Binary Decision Diagrams in most cases.
arXiv Detail & Related papers (2020-08-21T09:38:36Z)
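A minimal sketch of how such an And-Or structure can stay compact, based only on my reading of the summary above (not the authors' implementation): leaves assign a single variable, And nodes combine disjoint substates by Cartesian product, and Or nodes take the union of alternatives.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Leaf:
    """Single variable assignment node."""
    var: str
    val: bool
    def states(self):
        yield {self.var: self.val}

@dataclass
class And:
    """Cartesian product of substates over disjoint variables."""
    parts: list
    def states(self):
        for combo in product(*(p.states() for p in self.parts)):
            merged = {}
            for s in combo:
                merged.update(s)
            yield merged

@dataclass
class Or:
    """Union of alternative substates."""
    alts: list
    def states(self):
        for a in self.alts:
            yield from a.states()

# "a or b holds, c unknown": 8 small nodes instead of 6 explicit full states.
belief = And([
    Or([And([Leaf("a", True), Or([Leaf("b", True), Leaf("b", False)])]),
        And([Leaf("a", False), Leaf("b", True)])]),
    Or([Leaf("c", True), Leaf("c", False)]),
])
for s in belief.states():
    print(s)
```

Sharing substates across branches is what lets the graph grow slower than the enumerated belief state.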