ALIST: Associative Logic for Inference, Storage and Transfer. A Lingua
Franca for Inference on the Web
- URL: http://arxiv.org/abs/2303.06691v1
- Date: Sun, 12 Mar 2023 15:55:56 GMT
- Authors: Kwabena Nuamah and Alan Bundy
- Abstract summary: A formalism that abstracts the representation of queries from the specific query language of a knowledge graph.
A representation to dynamically curate data and functions (operations) over diverse knowledge sources.
A demonstration of the expressiveness of alists to represent the diversity of representational formalisms.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments in support for constructing knowledge graphs have led to
a rapid rise in their creation both on the Web and within organisations. Together
with existing sources of data, such as relational databases and APIs, these graphs
create a strong demand for techniques to query these diverse sources of knowledge.
While formal query languages, such as SPARQL, exist for querying some knowledge
graphs, users are required to know which knowledge graphs they need to query
and the unique resource identifiers of the resources they need. Although
alternative techniques in neural information retrieval embed the content of
knowledge graphs in vector spaces, they fail to provide the representation and
query expressivity needed (e.g. inability to handle non-trivial aggregation
functions such as regression). We believe that a lingua franca, i.e. a
formalism, that enables such representational flexibility will increase the
ability of intelligent automated agents to combine diverse data sources by
inference.
Our work proposes a flexible representation (alists) to support intelligent
federated querying of diverse knowledge sources. Our contributions include (1)
a formalism that abstracts the representation of queries from the specific
query language of a knowledge graph; (2) a representation to dynamically curate
data and functions (operations) to perform non-trivial inference over diverse
knowledge sources; (3) a demonstration of the expressiveness of alists to
represent the diversity of representational formalisms, including SPARQL
queries, and more generally first-order logic expressions.
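As a rough illustration of contributions (1) and (2), the Python sketch below shows how a question might be captured as an alist (an associative list of attribute-value pairs) and then lowered to a SPARQL query for one particular knowledge graph. The attribute names ("h", "v", "s", "p", "o", "t"), the example question, and the alist_to_sparql translation are assumptions made here for illustration only; the paper defines the actual alist attributes, operations, and mapping rules.

```python
# Hypothetical sketch: representing a question as an alist and translating it
# to SPARQL. Attribute names and the translation rules are illustrative
# assumptions, not the paper's actual definitions.

from typing import Any, Dict

Alist = Dict[str, Any]

# "What was the population of Ghana in 2010?" as an alist: the unknown is a
# variable, and "h" names the operation to apply to the retrieved values
# (it could equally be a non-trivial aggregate such as "regression").
query_alist: Alist = {
    "h": "value",       # operation to apply to the result set
    "v": "?y",          # variable to project
    "s": "Ghana",       # subject
    "p": "population",  # property
    "o": "?y",          # object, here the unknown
    "t": "2010",        # time qualifier (ignored by the toy translation below)
}


def alist_to_sparql(alist: Alist) -> str:
    """Translate a simple alist into an illustrative SPARQL query string.

    The label/property modelling below is a toy assumption so the example
    runs end to end; a real curation layer would map alist attributes onto
    each knowledge graph's own vocabulary and identifiers.
    """
    var = alist["v"]
    return (
        f"SELECT {var} WHERE {{\n"
        f'  ?entity rdfs:label "{alist["s"]}"@en .\n'
        f"  ?entity dbo:{alist['p']} {var} .\n"
        f"}}"
    )


if __name__ == "__main__":
    print(alist_to_sparql(query_alist))
```

Because the alist carries the operation to apply ("h") alongside the query content, a federated inference engine could in principle dispatch the same alist to several heterogeneous sources and aggregate the results, rather than hard-coding a single SPARQL endpoint.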
Related papers
- A large collection of bioinformatics question-query pairs over federated knowledge graphs: methodology and applications [0.0838491111002084]
We introduce a large collection of human-written natural language questions and their corresponding SPARQL queries over federated bioinformatics knowledge graphs.
We propose a methodology to uniformly represent the examples with minimal metadata, based on existing standards.
arXiv Detail & Related papers (2024-10-08T13:08:07Z)
- Contri(e)ve: Context + Retrieve for Scholarly Question Answering [0.0]
We present a two-step solution using an open-source Large Language Model (LLM), Llama 3.1, for the Scholarly-QALD dataset.
Firstly, we extract the context pertaining to the question from different structured and unstructured data sources.
Secondly, we implement prompt engineering to improve the information retrieval performance of the LLM.
arXiv Detail & Related papers (2024-09-13T17:38:47Z)
- GLaM: Fine-Tuning Large Language Models for Domain Knowledge Graph Alignment via Neighborhood Partitioning and Generative Subgraph Encoding [39.67113788660731]
We introduce a framework for developing Graph-aligned LAnguage Models (GLaM).
We demonstrate that grounding the models in specific graph-based knowledge expands the models' capacity for structure-based reasoning.
arXiv Detail & Related papers (2024-02-09T19:53:29Z)
- Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems [58.561904356651276]
We introduce the Knowledge-Enhanced Entity Representation Learning (KERL) framework to improve the semantic understanding of entities in conversational recommender systems.
KERL combines a knowledge graph with a pre-trained language model to learn richer entity representations.
KERL achieves state-of-the-art results in both recommendation and response generation tasks.
arXiv Detail & Related papers (2023-12-18T06:41:23Z)
- Hi-ArG: Exploring the Integration of Hierarchical Argumentation Graphs in Language Pretraining [62.069374456021016]
We propose the Hierarchical Argumentation Graph (Hi-ArG), a new structure to organize arguments.
We also introduce two approaches to exploit Hi-ArG, including a text-graph multi-modal model GreaseArG and a new pre-training framework augmented with graph information.
arXiv Detail & Related papers (2023-12-01T19:03:38Z)
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Graph Enhanced BERT for Query Understanding [55.90334539898102]
Query understanding plays a key role in exploring users' search intents and helping users locate the information they want.
In recent years, pre-trained language models (PLMs) have advanced various natural language processing tasks.
We propose a novel graph-enhanced pre-training framework, GE-BERT, which can leverage both query content and the query graph.
arXiv Detail & Related papers (2022-04-03T16:50:30Z)
- Taxonomy Enrichment with Text and Graph Vector Representations [61.814256012166794]
We address the problem of taxonomy enrichment, which aims at adding new words to an existing taxonomy.
We present a new method that achieves strong results on this task with little effort.
We achieve state-of-the-art results across different datasets and provide an in-depth error analysis of mistakes.
arXiv Detail & Related papers (2022-01-21T09:01:12Z)
- BERTese: Learning to Speak to BERT [50.76152500085082]
We propose a method for automatically rewriting queries into "BERTese", a paraphrased query that is directly optimized towards better knowledge extraction.
We empirically show our approach outperforms competing baselines, obviating the need for complex pipelines.
arXiv Detail & Related papers (2021-03-09T10:17:22Z)
- Dependently Typed Knowledge Graphs [4.157595789003928]
We show how standardized semantic web technologies (RDF and its query language SPARQL) can be reproduced in a unified manner with dependent type theory.
In addition to providing the basic functionalities of knowledge graphs, dependent types add expressiveness in encoding both entities and queries.
arXiv Detail & Related papers (2020-03-08T14:04:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.