Model-theoretic Characterizations of Existential Rule Languages
- URL: http://arxiv.org/abs/2001.08688v1
- Date: Thu, 23 Jan 2020 17:29:18 GMT
- Title: Model-theoretic Characterizations of Existential Rule Languages
- Authors: Heng Zhang, Yan Zhang, Guifei Jiang
- Abstract summary: Existential rules, a.k.a. dependencies in databases, are a family of important logical languages widely used in computer science and artificial intelligence.
We establish model-theoretic characterizations for a number of existential rule languages such as (disjunctive) embedded dependencies, tuple-generating dependencies (TGDs), (frontier-)guarded TGDs and linear TGDs.
As a natural application of these characterizations, complexity bounds for the rewritability of the above languages are also identified.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existential rules, known as dependencies in databases and, more
recently, as Datalog+/- in knowledge representation and reasoning, are a family
of important logical languages widely used in computer science and artificial intelligence.
Towards a deep understanding of these languages in model theory, we establish
model-theoretic characterizations for a number of existential rule languages
such as (disjunctive) embedded dependencies, tuple-generating dependencies
(TGDs), (frontier-)guarded TGDs and linear TGDs. All these characterizations
hold for arbitrary structures, and most of them also work on the class of
finite structures. As a natural application of these characterizations,
complexity bounds for the rewritability of the above languages are also identified.
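For readers unfamiliar with the rule classes named in the abstract, a brief reminder of their standard syntactic forms may help (these are the usual textbook definitions, not details taken from this paper):

```latex
% A tuple-generating dependency (TGD): a conjunction of body atoms
% implies an existentially quantified conjunction of head atoms.
\forall \vec{x}\,\forall \vec{y}\,
  \bigl( \varphi(\vec{x},\vec{y}) \rightarrow
         \exists \vec{z}\; \psi(\vec{x},\vec{z}) \bigr)

% Guarded TGD: the body contains an atom (the guard) that mentions
% all universally quantified variables of the rule.
% Frontier-guarded TGD: some body atom contains all frontier
% variables \vec{x}, i.e., the variables shared by body and head.
% Linear TGD: the body \varphi consists of a single atom.
```

Each restriction (linear &sub; guarded &sub; frontier-guarded) trades expressive power for better computational properties, which is why characterizing each class separately is of interest.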
Related papers
- NLAS-multi: A Multilingual Corpus of Automatically Generated Natural
Language Argumentation Schemes [4.015890309289342]
We present an effective methodology for the automatic generation of natural language arguments in different topics and languages.
We also present a set of solid baselines and fine-tuned models for the automatic identification of argumentation schemes.
arXiv Detail & Related papers (2024-02-22T11:31:50Z) - How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering [52.86931192259096]
Knowledge Base Question Answering (KBQA) aims to answer natural language questions based on facts in knowledge bases.
Recent works leverage the capabilities of large language models (LLMs) for logical form generation to improve performance.
arXiv Detail & Related papers (2024-01-11T09:27:50Z) - Formal Aspects of Language Modeling [74.16212987886013]
Large language models have become one of the most commonly deployed NLP inventions.
These notes are the accompaniment to the theoretical portion of the ETH Zürich course on large language models.
arXiv Detail & Related papers (2023-11-07T20:21:42Z) - Parrot Mind: Towards Explaining the Complex Task Reasoning of Pretrained Large Language Models with Template-Content Structure [66.33623392497599]
We show that a structure called template-content structure (T-C structure) can reduce the possible space from exponential level to linear level.
We demonstrate that models can achieve task composition, further reducing the space needed to learn from linear to logarithmic.
arXiv Detail & Related papers (2023-10-09T06:57:45Z) - Physics of Language Models: Part 1, Learning Hierarchical Language Structures [51.68385617116854]
Transformer-based language models are effective but complex, and understanding their inner workings is a significant challenge.
We introduce a family of synthetic CFGs that produce hierarchical rules, capable of generating lengthy sentences.
We demonstrate that generative models like GPT can accurately learn this CFG language and generate sentences based on it.
arXiv Detail & Related papers (2023-05-23T04:28:16Z) - Formal Specifications from Natural Language [3.1806743741013657]
We study the ability of language models to translate natural language into formal specifications with complex semantics.
In particular, we fine-tune off-the-shelf language models on three datasets consisting of structured English sentences.
arXiv Detail & Related papers (2022-06-04T10:49:30Z) - Characterizing the Program Expressive Power of Existential Rule
Languages [4.38078043834754]
Existential rule languages have been widely used in ontology-mediated query answering (OMQA).
The expressive power of representing domain knowledge for OMQA, known as the program expressive power, is not yet well understood.
In this paper, we establish a number of novel characterizations for the program expressive power of several important existential rule languages.
arXiv Detail & Related papers (2021-12-15T14:08:38Z) - Modelling Compositionality and Structure Dependence in Natural Language [0.12183405753834563]
Drawing on linguistics and set theory, a formalisation of these ideas is presented in the first half of this thesis.
We see that cognitive systems that process language need to satisfy certain functional constraints.
Using the advances of word embedding techniques, a model of relational learning is simulated.
arXiv Detail & Related papers (2020-11-22T17:28:50Z) - Linguistic Typology Features from Text: Inferring the Sparse Features of
World Atlas of Language Structures [73.06435180872293]
We construct a recurrent neural network predictor based on byte embeddings and convolutional layers.
We show that some features from various linguistic types can be predicted reliably.
arXiv Detail & Related papers (2020-04-30T21:00:53Z) - Evaluating Transformer-Based Multilingual Text Classification [55.53547556060537]
We argue that NLP tools perform unequally across languages with different syntactic and morphological structures.
We calculate word order and morphological similarity indices to aid our empirical study.
arXiv Detail & Related papers (2020-04-29T03:34:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.