Alternating Fixpoint Operator for Hybrid MKNF Knowledge Bases as an
Approximator of AFT
- URL: http://arxiv.org/abs/2105.11071v1
- Date: Mon, 24 May 2021 02:32:51 GMT
- Title: Alternating Fixpoint Operator for Hybrid MKNF Knowledge Bases as an
Approximator of AFT
- Authors: Fangfang Liu and Jia-huai You
- Abstract summary: We show that Knorr et al.'s study of the well-founded semantics for hybrid MKNF knowledge bases is in fact an approximator of AFT in disguise.
We present an improved approximator for these knowledge bases, whose least stable fixpoint carries more information than the one obtained from Knorr et al.'s construction.
This work is built on an extension of AFT that supports consistent as well as inconsistent pairs in the induced product bilattice.
- Score: 10.843231120912757
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Approximation fixpoint theory (AFT) provides an algebraic framework for the
study of fixpoints of operators on bilattices and has found its applications in
characterizing semantics for various classes of logic programs and nonmonotonic
languages. In this paper, we show one more application of this kind: the
alternating fixpoint operator by Knorr et al. for the study of the well-founded
semantics for hybrid MKNF knowledge bases is in fact an approximator of AFT in
disguise, which, thanks to the power of abstraction of AFT, characterizes not
only the well-founded semantics but also two-valued as well as three-valued
semantics for hybrid MKNF knowledge bases. Furthermore, we present an improved
approximator for these knowledge bases, whose least stable fixpoint carries more
information than the one obtained from Knorr et al.'s construction.
This leads to an improved computation for the well-founded semantics. This work
is built on an extension of AFT that supports consistent as well as
inconsistent pairs in the induced product bilattice, to deal with
inconsistencies that arise in the context of hybrid MKNF knowledge bases. This
part of the work can be viewed as a generalization of the original AFT from
symmetric approximators to arbitrary approximators.
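The abstract's central objects, an approximator on the product bilattice and its least stable (well-founded) fixpoint, can be illustrated with a minimal sketch. The Python code below computes the well-founded model of a tiny normal logic program by Van Gelder-style alternating iteration on (lower, upper) pairs; the example program, rule encoding, and helper names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed encoding, not the paper's): alternating-fixpoint
# computation of the well-founded semantics for a normal logic program,
# illustrating an AFT-style approximator on pairs (true, possible).

# A rule is (head, positive_body, negative_body).
RULES = [
    ("p", set(), {"q"}),   # p :- not q.
    ("q", set(), {"r"}),   # q :- not r.
    ("r", set(), set()),   # r.
]

def lfp_gamma(assumed_false):
    """Least fixpoint of the definite program obtained by evaluating
    'not a' as true exactly when a is in assumed_false."""
    true = set()
    changed = True
    while changed:
        changed = False
        for head, pos, neg in RULES:
            if pos <= true and neg <= assumed_false and head not in true:
                true.add(head)
                changed = True
    return true

def well_founded():
    atoms = {h for h, _, _ in RULES} | {a for _, p, n in RULES for a in p | n}
    lower, upper = set(), atoms          # consistent pair: lower <= upper
    while True:
        new_lower = lfp_gamma(atoms - upper)   # certainly true atoms
        new_upper = lfp_gamma(atoms - lower)   # possibly true atoms
        if (new_lower, new_upper) == (lower, upper):
            return lower, upper
        lower, upper = new_lower, new_upper

T, P = well_founded()
print("true:", sorted(T))          # atoms true in the well-founded model
print("undefined:", sorted(P - T)) # atoms left undefined
```

For this program the iteration converges to true = {p, r} with nothing undefined: r is a fact, so q fails, so p succeeds. The paper's point is that Knorr et al.'s operator for hybrid MKNF knowledge bases plays exactly the role of `lfp_gamma` composed into a stable revision, once cast in AFT terms.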
Related papers
- LINC: A Neurosymbolic Approach for Logical Reasoning by Combining
Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
arXiv Detail & Related papers (2023-10-23T17:58:40Z)
- Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z)
- Eliminating Unintended Stable Fixpoints for Hybrid Reasoning Systems [5.208405959764274]
We introduce a methodology resembling AFT that can utilize previously computed upper bounds to capture semantics more precisely.
We demonstrate our framework's applicability to hybrid MKNF (minimal knowledge and negation as failure) knowledge bases by extending the state-of-the-art approximator.
arXiv Detail & Related papers (2023-07-21T01:08:15Z)
- Equivalence Between SE(3) Equivariant Networks via Steerable Kernels and Group Convolution [90.67482899242093]
A wide range of techniques have been proposed in recent years for designing neural networks for 3D data that are equivariant under rotation and translation of the input.
We provide an in-depth analysis of both methods and their equivalence and relate the two constructions to multiview convolutional networks.
We also derive new TFN non-linearities from our equivalence principle and test them on practical benchmark datasets.
arXiv Detail & Related papers (2022-11-29T03:42:11Z)
- Robust Manifold Nonnegative Tucker Factorization for Tensor Data Representation [44.845291873747335]
Nonnegative Tucker Factorization (NTF) minimizes the Euclidean distance or Kullback-Leibler divergence between the original data and its low-rank approximation.
NTF suffers from rotational ambiguity: solutions with and without rotation transformations are equally good in the sense of yielding the maximum likelihood.
We propose three Robust Manifold NTF algorithms to handle outliers by incorporating structural knowledge about the outliers.
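For a third-order tensor, the NTF objective mentioned in this blurb can be written as follows; this is the standard textbook formulation, not notation taken from the paper itself:

```latex
\min_{\mathcal{G},\, A^{(1)},\, A^{(2)},\, A^{(3)} \geq 0}
\;\bigl\| \mathcal{X} - \mathcal{G} \times_1 A^{(1)} \times_2 A^{(2)} \times_3 A^{(3)} \bigr\|_F^2
```

where $\mathcal{X}$ is the data tensor, $\mathcal{G}$ the nonnegative core, $A^{(n)}$ the nonnegative factor matrices, and $\times_n$ the mode-$n$ product; the Kullback-Leibler variant replaces the Frobenius norm with the KL divergence.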
arXiv Detail & Related papers (2022-11-08T01:16:21Z)
- A Fixpoint Characterization of Three-Valued Disjunctive Hybrid MKNF Knowledge Bases [0.0]
We present a fixpoint construction that leverages head-cuts using an operator that iteratively captures three-valued models of disjunctive hybrid MKNF knowledge bases with disjunctive rules.
This work also captures partial stable models of disjunctive logic programs, since a program can be expressed as a disjunctive hybrid MKNF knowledge base with an empty ontology.
arXiv Detail & Related papers (2022-08-05T10:47:07Z)
- Supporting Vision-Language Model Inference with Confounder-pruning Knowledge Prompt [71.77504700496004]
Vision-language models are pre-trained by aligning image-text pairs in a common space to deal with open-set visual concepts.
To boost the transferability of the pre-trained models, recent works adopt fixed or learnable prompts.
However, how and what prompts can improve inference performance remains unclear.
arXiv Detail & Related papers (2022-05-23T07:51:15Z)
- Maximum Batch Frobenius Norm for Multi-Domain Text Classification [19.393393465837377]
We propose a maximum batch Frobenius norm (MBF) method to boost the feature discriminability for multi-domain text classification.
Experiments on two MDTC benchmarks show that our MBF approach can effectively advance the performance of the state-of-the-art.
arXiv Detail & Related papers (2022-01-29T14:37:56Z)
- First Power Linear Unit with Sign [0.0]
The proposed activation function is inspired by the common inverse operation and carries an intuitive bionic interpretation.
We extend the function presented to a more generalized type called PFPLUS with two parameters that can be fixed or learnable.
arXiv Detail & Related papers (2021-11-29T06:47:58Z)
- Finite-Function-Encoding Quantum States [52.77024349608834]
We introduce finite-function-encoding (FFE) states which encode arbitrary $d$-valued logic functions.
We investigate some of their structural properties.
arXiv Detail & Related papers (2020-12-01T13:53:23Z)
- Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees [49.91477656517431]
Quantization-based solvers have been widely adopted in Federated Learning (FL).
However, no existing method enjoys all of the desired properties.
We propose an intuitively simple yet theoretically sound method based on SIGNSGD to bridge the gap.
arXiv Detail & Related papers (2020-02-25T15:12:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.