DomiKnowS: A Library for Integration of Symbolic Domain Knowledge in
Deep Learning
- URL: http://arxiv.org/abs/2108.12370v1
- Date: Fri, 27 Aug 2021 16:06:42 GMT
- Title: DomiKnowS: A Library for Integration of Symbolic Domain Knowledge in
Deep Learning
- Authors: Hossein Rajaby Faghihi, Quan Guo, Andrzej Uszok, Aliakbar Nafar,
Elaheh Raisi, and Parisa Kordjamshidi
- Abstract summary: We demonstrate a library for the integration of domain knowledge in deep learning architectures.
Using this library, the structure of the data is expressed symbolically via graph declarations.
The domain knowledge can be defined explicitly, which improves the models' explainability.
- Score: 12.122347427933637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We demonstrate a library for the integration of domain knowledge in deep
learning architectures. Using this library, the structure of the data is
expressed symbolically via graph declarations and the logical constraints over
outputs or latent variables can be seamlessly added to the deep models. The
domain knowledge can be defined explicitly, which improves the models'
explainability in addition to the performance and generalizability in the
low-data regime. Several approaches for such an integration of symbolic and
sub-symbolic models have been introduced; however, there is no library to
facilitate the programming for such an integration in a generic way while
various underlying algorithms can be used. Our library aims to simplify
programming for such an integration in both training and inference phases while
separating the knowledge representation from learning algorithms. We showcase
various NLP benchmark tasks and beyond. The framework is publicly available at
GitHub (https://github.com/HLR/DomiKnowS).
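The pattern the abstract describes, declaring symbolic knowledge separately from the learning model and enforcing it over model outputs at inference time, can be illustrated with a toy sketch. Note that this is a hypothetical illustration, not the actual DomiKnowS API: the rule work_for(x, y) implies person(x) and org(y), the label set, and all function names are invented here for demonstration.

```python
# Toy sketch of knowledge-constrained inference (NOT the DomiKnowS API):
# a hand-written logical rule is declared separately from the model and
# enforced by constrained decoding over the model's (here, fake) scores.
from itertools import product

ENTITY_LABELS = ["person", "org", "other"]

def work_for_constraint(label_x, label_y, relation_on):
    """Domain rule: work_for(x, y) implies person(x) and org(y)."""
    if relation_on:
        return label_x == "person" and label_y == "org"
    return True

def constrained_decode(scores_x, scores_y, relation_scores):
    """Return the highest-scoring joint assignment satisfying the rule.

    scores_x / scores_y: dicts mapping entity label -> score.
    relation_scores: dict mapping True/False (relation holds?) -> score.
    """
    best, best_score = None, float("-inf")
    for lx, ly, rel in product(ENTITY_LABELS, ENTITY_LABELS, [True, False]):
        if not work_for_constraint(lx, ly, rel):
            continue  # prune assignments that violate domain knowledge
        score = scores_x[lx] + scores_y[ly] + relation_scores[rel]
        if score > best_score:
            best, best_score = (lx, ly, rel), score
    return best

# The unconstrained argmax here would be ("other", "org", True), which
# violates the rule; the constraint forces a consistent assignment.
prediction = constrained_decode(
    {"person": 0.4, "org": 0.1, "other": 0.5},
    {"person": 0.1, "org": 0.8, "other": 0.1},
    {True: 0.9, False: 0.1},
)
# prediction == ("person", "org", True)
```

Real integration libraries replace the brute-force enumeration above with ILP solvers or differentiable constraint losses, but the separation of the declared rule from the scoring model is the same idea.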
Related papers
- Exploiting the Semantic Knowledge of Pre-trained Text-Encoders for Continual Learning [70.64617500380287]
Continual learning allows models to learn from new data while retaining previously learned knowledge.
The label information of the images carries semantic knowledge that can be related to previously acquired knowledge of semantic classes.
We propose integrating semantic guidance within and across tasks by capturing semantic similarity using text embeddings.
arXiv Detail & Related papers (2024-08-02T07:51:44Z)
- pyGSL: A Graph Structure Learning Toolkit [14.000763778781547]
pyGSL is a Python library that provides efficient implementations of state-of-the-art graph structure learning models.
pyGSL is written in a GPU-friendly way, allowing it to scale to much larger network tasks.
arXiv Detail & Related papers (2022-11-07T14:23:10Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weak-supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
- Joint Language Semantic and Structure Embedding for Knowledge Graph Completion [66.15933600765835]
We propose to jointly embed the semantics in the natural language description of the knowledge triplets with their structure information.
Our method embeds knowledge graphs for the completion task via fine-tuning pre-trained language models.
Our experiments on a variety of knowledge graph benchmarks have demonstrated the state-of-the-art performance of our method.
arXiv Detail & Related papers (2022-09-19T02:41:02Z)
- CLEVR Parser: A Graph Parser Library for Geometric Learning on Language Grounded Image Scenes [2.750124853532831]
CLEVR dataset has been used extensively in language grounded visual reasoning in Machine Learning (ML) and Natural Language Processing (NLP) domains.
We present a graph library for CLEVR that provides functionalities for object-centric attributes and relationships extraction, and construction of structural graph representations for dual modalities.
We discuss downstream usage and applications of the library, and how it accelerates research for the NLP community.
arXiv Detail & Related papers (2020-09-19T03:32:37Z)
- Captum: A unified and generic model interpretability library for PyTorch [49.72749684393332]
We introduce a novel, unified, open-source model interpretability library for PyTorch.
The library contains generic implementations of a number of gradient and perturbation-based attribution algorithms.
It can be used for both classification and non-classification models.
arXiv Detail & Related papers (2020-09-16T18:57:57Z)
- Synbols: Probing Learning Algorithms with Synthetic Datasets [112.45883250213272]
Synbols is a tool for rapidly generating new datasets with a rich composition of latent features rendered in low resolution images.
Our tool's high-level interface provides a language for rapidly generating new distributions on the latent features.
To showcase the versatility of Synbols, we use it to dissect the limitations and flaws in standard learning algorithms in various learning setups.
arXiv Detail & Related papers (2020-09-14T13:03:27Z)
- Torch-Struct: Deep Structured Prediction Library [138.5262350501951]
We introduce Torch-Struct, a library for structured prediction.
Torch-Struct includes a broad collection of probabilistic structures accessed through a simple and flexible distribution-based API.
arXiv Detail & Related papers (2020-02-03T16:43:02Z)
- Incorporating Joint Embeddings into Goal-Oriented Dialogues with Multi-Task Learning [8.662586355051014]
We propose an RNN-based end-to-end encoder-decoder architecture which is trained with joint embeddings of the knowledge graph and the corpus as input.
The model provides an additional integration of user intent along with text generation, trained with a multi-task learning paradigm.
arXiv Detail & Related papers (2020-01-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.