Eliminating Unintended Stable Fixpoints for Hybrid Reasoning Systems
- URL: http://arxiv.org/abs/2307.11286v1
- Date: Fri, 21 Jul 2023 01:08:15 GMT
- Title: Eliminating Unintended Stable Fixpoints for Hybrid Reasoning Systems
- Authors: Spencer Killen, Jia-Huai You
- Abstract summary: We introduce a methodology resembling AFT that can utilize previously computed upper bounds to capture semantics more precisely.
We demonstrate our framework's applicability to hybrid MKNF (minimal knowledge and negation as failure) knowledge bases by extending the state-of-the-art approximator.
- Score: 5.208405959764274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A wide variety of nonmonotonic semantics can be expressed as approximators
defined under AFT (Approximation Fixpoint Theory). Using traditional AFT
theory, it is not possible to define approximators that rely on information
computed in previous iterations of stable revision. However, this information
is valuable for semantics that incorporate classical negation into nonmonotonic
reasoning. In this work, we introduce a methodology resembling AFT that can
utilize previously computed upper bounds to capture semantics more precisely. We
demonstrate our framework's applicability to hybrid MKNF (minimal knowledge and
negation as failure) knowledge bases by extending the state-of-the-art
approximator.
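For context, the stable revision operator of standard AFT, whose fixpoints the paper refines, has the following textbook form (notation ours, following Denecker, Marek, and Truszczynski; not reproduced from the paper):
```latex
% Standard AFT: L is a complete lattice and A : L^2 -> L^2 an approximator;
% a pair (x, y) reads as lower bound x and upper bound y.
\[
  S(A)(x, y) \;=\; \Bigl(\,\mathrm{lfp}\bigl(A(\cdot, y)_1\bigr),\;
                         \mathrm{lfp}\bigl(A(x, \cdot)_2\bigr)\Bigr)
\]
% Stable fixpoints are the fixpoints of S(A). Under traditional AFT, A(., y)_1
% cannot consult upper bounds from earlier stable-revision iterations, which
% is exactly the limitation the paper's extended methodology removes.
```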
Related papers
- A Canonicalization Perspective on Invariant and Equivariant Learning [54.44572887716977]
We introduce a canonicalization perspective that provides an essential and complete view of the design of frames.
We show that there exists an inherent connection between frames and canonical forms.
We design novel frames for eigenvectors that are strictly superior to existing methods; a toy canonicalization example follows this entry.
arXiv Detail & Related papers (2024-05-28T17:22:15Z)
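As a much-simplified illustration of canonicalizing eigenvectors, consider the common sign-fixing convention; this toy sketch is ours and is not the paper's method:
```python
import numpy as np

def canonicalize_eigvecs(matrix):
    """Toy canonical form for eigenvectors of a symmetric matrix.

    Eigenvectors are only defined up to sign; fixing the sign of the
    largest-magnitude entry picks one canonical representative per
    sign orbit. This is a simplified stand-in for the paper's frames.
    """
    eigvals, eigvecs = np.linalg.eigh(matrix)
    for j in range(eigvecs.shape[1]):
        k = np.argmax(np.abs(eigvecs[:, j]))  # largest-magnitude entry
        if eigvecs[k, j] < 0:                 # flip so that entry is positive
            eigvecs[:, j] *= -1
    return eigvals, eigvecs

# Sign-flipped eigendecompositions of the same matrix now agree exactly.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
_, V = canonicalize_eigvecs(A)
```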
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error, we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference; a hedged sketch of the idea follows this entry.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
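The paper's exact relaxation is not reproduced here; the sketch below only illustrates the general pattern of adding a differentiable (soft-binned) calibration penalty to a training loss. The penalty form, names, and hyperparameters are all our assumptions:
```python
import torch

def soft_calibration_penalty(confidences, correct, n_bins=10, temp=50.0):
    """Differentiable surrogate for expected calibration error (ECE).

    Hard bin assignments are replaced by sigmoid-smoothed memberships so
    the penalty admits gradients. This is an illustrative relaxation,
    not the one proposed in the paper.
    """
    edges = torch.linspace(0.0, 1.0, n_bins + 1)[1:-1]  # interior bin edges
    # Soft cumulative membership below each edge, shape (N, n_bins - 1).
    below = torch.sigmoid(temp * (edges.unsqueeze(0) - confidences.unsqueeze(1)))
    ones = torch.ones_like(confidences).unsqueeze(1)
    zeros = torch.zeros_like(confidences).unsqueeze(1)
    # Per-bin membership: difference of consecutive cumulative memberships.
    member = torch.cat([below, ones], dim=1) - torch.cat([zeros, below], dim=1)
    weight = member.sum(dim=0) + 1e-8                    # soft bin counts
    avg_conf = (member * confidences.unsqueeze(1)).sum(dim=0) / weight
    avg_acc = (member * correct.unsqueeze(1)).sum(dim=0) / weight
    return ((weight / weight.sum()) * (avg_conf - avg_acc).abs()).sum()

# Training objective: task loss plus a weighted calibration term, e.g.
# loss = nll_loss + lam * soft_calibration_penalty(conf, correct.float())
```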
- Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z)
- When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice; a minimal sketch follows this entry.
arXiv Detail & Related papers (2023-07-06T04:13:57Z)
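A minimal sketch of confidence-based deferral in a two-model cascade, assuming generic `cheap_model`/`expensive_model` callables and a fixed threshold (all names are illustrative, not from the paper):
```python
import numpy as np

def cascade_predict(x, cheap_model, expensive_model, threshold=0.9):
    """Confidence-based cascade: invoke the expensive model only when the
    cheap model's top-class probability falls below the threshold.

    Both models are assumed to return a probability vector over classes.
    """
    probs = cheap_model(x)
    confidence = np.max(probs)            # top-class probability
    if confidence >= threshold:           # confident: terminate prediction here
        return int(np.argmax(probs)), "cheap"
    probs = expensive_model(x)            # otherwise defer down the cascade
    return int(np.argmax(probs)), "expensive"
```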
- Approximate inference of marginals using the IBIA framework [0.0]
Exact inference of marginals in probabilistic graphical models (PGM) is known to be intractable.
We propose a new algorithm for marginal inference that is based on the incremental build-infer-approximate (IBIA) paradigm.
Our method achieves accuracy that is better than or comparable to existing variational and sampling-based methods, with smaller runtimes.
arXiv Detail & Related papers (2023-06-01T04:24:21Z)
- A Convergence Theory for Federated Average: Beyond Smoothness [28.074273047592065]
Federated learning enables a large number of edge devices to jointly learn a model without sharing data.
As a leading algorithm in this setting, Federated Averaging (FedAvg), which runs stochastic gradient descent (SGD) in parallel on local devices, has been widely used.
This paper provides a theoretical convergence study of federated learning; a minimal FedAvg sketch follows this entry.
arXiv Detail & Related papers (2022-11-03T04:50:49Z)
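For reference, the basic FedAvg update (local SGD steps followed by server-side averaging) can be sketched as follows; the gradient oracle, data split, and step counts are illustrative assumptions:
```python
import numpy as np

def fedavg_round(weights, client_data, grad_fn, local_steps=5, lr=0.1):
    """One round of FedAvg on a shared parameter vector.

    Each client runs `local_steps` of SGD from the current global weights
    on its own batches, then the server averages the resulting weights.
    `grad_fn(w, batch)` is an assumed gradient oracle for the local loss.
    """
    client_weights = []
    for data in client_data:              # in practice, clients run in parallel
        w = weights.copy()
        for batch in data[:local_steps]:  # a few local SGD steps
            w -= lr * grad_fn(w, batch)
        client_weights.append(w)
    return np.mean(client_weights, axis=0)  # server-side averaging
```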
- MaxMatch: Semi-Supervised Learning with Worst-Case Consistency [149.03760479533855]
We propose a worst-case consistency regularization technique for semi-supervised learning (SSL).
We present a generalization bound for SSL consisting of the empirical loss terms observed on labeled and unlabeled training data separately.
Motivated by this bound, we derive an SSL objective that minimizes the largest inconsistency between an original unlabeled sample and its multiple augmented variants; a schematic sketch follows this entry.
arXiv Detail & Related papers (2022-09-26T12:04:49Z)
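Schematically, a worst-case consistency term keeps only the largest divergence across augmentations; this sketch uses KL divergence and hypothetical `model`/`augment` callables, which are our assumptions rather than the paper's exact formulation:
```python
import torch
import torch.nn.functional as F

def worst_case_consistency(model, x_unlabeled, augment, n_aug=4):
    """Largest inconsistency between a sample and its augmented variants.

    Each augmentation's consistency loss is the KL divergence from the
    (detached) prediction on the original sample; taking the max yields
    a worst-case rather than average-case regularizer.
    """
    with torch.no_grad():
        p = F.softmax(model(x_unlabeled), dim=-1)   # target distribution
    losses = []
    for _ in range(n_aug):
        logq = F.log_softmax(model(augment(x_unlabeled)), dim=-1)
        losses.append(F.kl_div(logq, p, reduction="none").sum(dim=-1))
    return torch.stack(losses, dim=0).max(dim=0).values.mean()
```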
- Alternating Fixpoint Operator for Hybrid MKNF Knowledge Bases as an Approximator of AFT [10.843231120912757]
We show that the fixpoint construction in Knorr et al.'s study of the well-founded semantics for hybrid MKNF knowledge bases is in fact an AFT approximator in disguise.
We present an improved approximator for these knowledge bases, whose least stable fixpoint carries more information than the one obtained from Knorr et al.'s construction.
This work is built on an extension of AFT that supports consistent as well as inconsistent pairs in the induced product bilattice; a generic alternating-fixpoint sketch follows this entry.
arXiv Detail & Related papers (2021-05-24T02:32:51Z)
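To ground the terminology, the classical alternating fixpoint of Van Gelder (computing the well-founded semantics of a normal logic program) can be sketched as below; the rule encoding is our simplification and is not specific to hybrid MKNF:
```python
def gamma(rules, assumed_true):
    """Gamma operator: least model of the program reduct w.r.t. `assumed_true`.

    A rule (head, pos, neg) contributes once every atom in `pos` is derived
    and no atom in `neg` belongs to `assumed_true`; Gamma is antimonotone.
    """
    derived, changed = set(), True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in derived and pos <= derived and not (neg & assumed_true):
                derived.add(head)
                changed = True
    return derived

def well_founded(rules):
    """Alternating fixpoint: the lower bound grows while the upper one shrinks."""
    lower = set()
    while True:
        upper = gamma(rules, lower)      # overestimate: atoms still possible
        new_lower = gamma(rules, upper)  # underestimate: atoms certainly true
        if new_lower == lower:
            return lower, upper          # atoms outside `upper` are false
        lower = new_lower

# p :- not q.   q :- not p.
# well_founded([("p", set(), {"q"}), ("q", set(), {"p"})]) == (set(), {"p", "q"}),
# i.e., both atoms are undefined in the well-founded model.
```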
- Lower Bounds for Approximate Knowledge Compilation [7.538482310185135]
We focus on circuits in deterministic decomposable negation normal form (d-DNNF).
We formalize two notions of approximation: weak approximation and strong approximation.
We show lower bounds for approximation by d-DNNF, complementing the positive results from the literature.
arXiv Detail & Related papers (2020-11-27T13:11:32Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)