Not all tickets are equal and we know it: Guiding pruning with
domain-specific knowledge
- URL: http://arxiv.org/abs/2403.04805v1
- Date: Tue, 5 Mar 2024 23:02:55 GMT
- Title: Not all tickets are equal and we know it: Guiding pruning with
domain-specific knowledge
- Authors: Intekhab Hossain, Jonas Fischer, Rebekka Burkholz, John Quackenbush
- Abstract summary: We propose DASH, which guides pruning by available domain-specific structural information.
In the context of learning dynamic gene regulatory network models, we show that DASH combined with existing general knowledge on interaction partners provides data-specific insights aligned with biology.
- Score: 26.950765295157897
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural structure learning is of paramount importance for scientific discovery
and interpretability. Yet, contemporary pruning algorithms that focus on
computational resource efficiency face algorithmic barriers to selecting a
meaningful model that aligns with domain expertise. To mitigate this challenge,
we propose DASH, which guides pruning by available domain-specific structural
information. In the context of learning dynamic gene regulatory network models,
we show that DASH combined with existing general knowledge on interaction
partners provides data-specific insights aligned with biology. For this task,
we demonstrate the effectiveness of DASH on synthetic data with ground-truth
information and on two real-world applications, where it outperforms competing
methods by a large margin and yields more meaningful biological insights. Our
work shows that domain-specific structural information has the potential to
improve model-derived scientific insights.
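The abstract does not spell out DASH's update rule, but the core idea of scoring network weights by domain priors as well as magnitude can be sketched roughly as follows. This is an illustrative sketch, not the paper's actual procedure: the function name, the `alpha` blending coefficient, and the min-max normalization are our own assumptions.

```python
import numpy as np

def prior_guided_prune(weights, prior, sparsity, alpha=0.5):
    """Illustrative sketch of prior-guided pruning (not the exact DASH
    procedure): rank each weight by a convex combination of its own
    magnitude and a domain-derived prior score, then keep only the top
    (1 - sparsity) fraction of weights."""
    # Normalize both criteria to [0, 1] so they are comparable.
    w = np.abs(weights)
    w = w / (w.max() + 1e-12)
    p = prior / (prior.max() + 1e-12)
    score = alpha * w + (1 - alpha) * p
    # Keep the k highest-scoring weights; zero out the rest.
    k = int(np.ceil((1 - sparsity) * score.size))
    threshold = np.sort(score, axis=None)[-k]
    mask = score >= threshold
    return weights * mask, mask
```

With `alpha=1` this reduces to plain magnitude pruning; lowering `alpha` lets the domain prior (e.g., known gene-gene interaction partners) override small-magnitude weights that are biologically supported.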
Related papers
- Iterative Zero-Shot LLM Prompting for Knowledge Graph Construction [104.29108668347727]
This paper proposes an innovative knowledge graph generation approach that leverages the potential of the latest generative large language models.
The approach is conveyed in a pipeline that comprises novel iterative zero-shot and external knowledge-agnostic strategies.
We claim that our proposal is a suitable solution for scalable and versatile knowledge graph construction and may be applied to different and novel contexts.
arXiv Detail & Related papers (2023-07-03T16:01:45Z)
- Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z)
- Learning the Finer Things: Bayesian Structure Learning at the Instantiation Level [0.0]
Successful machine learning methods require a trade-off between memorization and generalization.
We present a novel probabilistic graphical model structure learning approach that can learn, generalize and explain in elusive domains.
arXiv Detail & Related papers (2023-03-08T02:31:49Z)
- Coarse-to-fine Knowledge Graph Domain Adaptation based on Distantly-supervised Iterative Training [12.62127290494378]
We propose an integrated framework for adapting and re-learning knowledge graphs.
No manual data annotation is required to train the model.
We introduce a novel iterative training strategy to facilitate the discovery of domain-specific named entities and triples.
arXiv Detail & Related papers (2022-11-05T08:16:38Z)
- Joint Language Semantic and Structure Embedding for Knowledge Graph Completion [66.15933600765835]
We propose to jointly embed the semantics in the natural language description of the knowledge triplets with their structure information.
Our method embeds knowledge graphs for the completion task via fine-tuning pre-trained language models.
Our experiments on a variety of knowledge graph benchmarks have demonstrated the state-of-the-art performance of our method.
arXiv Detail & Related papers (2022-09-19T02:41:02Z)
- Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z)
- Incorporation of Deep Neural Network & Reinforcement Learning with Domain Knowledge [0.0]
We present a study of the ways in which domain information has been incorporated when building neural network models.
Integrating domain data is especially important to the development of knowledge-understanding models, as well as to other fields that aid in understanding information through human-machine interfaces and reinforcement learning.
arXiv Detail & Related papers (2021-07-29T17:29:02Z)
- How to Tell Deep Neural Networks What We Know [2.2186394337073527]
This paper examines the inclusion of domain-knowledge by means of changes to: the input, the loss-function, and the architecture of deep networks.
In each category, we describe techniques that have been shown to yield significant changes in network performance.
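Of the three categories above, the loss-function route is the easiest to sketch: augment the task loss with a penalty on parameters that contradict the domain prior. The function below is a minimal illustration under assumed names (`prior_mask`, `lam` are ours, not the paper's):

```python
import numpy as np

def knowledge_regularized_loss(pred, target, weights, prior_mask, lam=1e-3):
    """Minimal sketch of a knowledge-informed loss (illustrative, not
    from the paper): a standard task loss plus a penalty on weights
    that the domain prior marks as absent."""
    # Standard task loss (mean squared error here).
    task = np.mean((pred - target) ** 2)
    # prior_mask entries are 1 where a connection is supported by domain
    # knowledge and 0 otherwise; unsupported weights are penalized by
    # their magnitude.
    penalty = np.sum(np.abs(weights) * (1 - prior_mask))
    return task + lam * penalty
```

The same pattern carries over to the other two categories: priors can instead be encoded as extra input features or baked into the architecture as a fixed connectivity mask.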
arXiv Detail & Related papers (2021-07-21T18:18:02Z)
- A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for Question Answering Over Dynamic Contexts [81.4757750425247]
We study question answering over a dynamic textual environment.
We develop a graph neural network over the constructed graph, and train the model in an end-to-end manner.
arXiv Detail & Related papers (2020-04-25T04:53:54Z)
- Domain Adaption for Knowledge Tracing [65.86619804954283]
We propose a novel adaptable framework, namely adaptable knowledge tracing (AKT), to address the domain adaptation for knowledge tracing (DAKT) problem.
For the first aspect, we incorporate educational characteristics (e.g., slip, guess, question texts) into deep knowledge tracing (DKT) to obtain a well-performing knowledge tracing model.
For the second aspect, we propose and adopt three domain adaptation processes. First, we pre-train an auto-encoder to select useful source instances for target model training.
arXiv Detail & Related papers (2020-01-14T15:04:48Z)
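The instance-selection step described in the last entry (pre-training an auto-encoder and keeping the source instances it handles well) can be approximated with a lighter sketch. Here a PCA subspace fitted on target data stands in for the auto-encoder, and all names and parameters are our own illustrative choices:

```python
import numpy as np

def select_transferable_instances(source, target, n_components=2, keep=0.5):
    """Illustrative sketch of instance selection for domain adaptation
    (PCA stands in for the paper's auto-encoder): fit a low-dimensional
    reconstruction on target data, then keep the source instances it
    reconstructs best, assuming those lie closest to the target
    distribution."""
    mu = target.mean(axis=0)
    # Principal directions of the target data via SVD.
    _, _, vt = np.linalg.svd(target - mu, full_matrices=False)
    basis = vt[:n_components]
    # Reconstruction error of each source instance in the target subspace.
    centered = source - mu
    recon = centered @ basis.T @ basis
    err = np.linalg.norm(centered - recon, axis=1)
    # Keep the fraction of source instances with the lowest error.
    k = max(1, int(keep * len(source)))
    idx = np.argsort(err)[:k]
    return source[idx], idx
```

Source instances that fall far from the target subspace get large reconstruction errors and are dropped before target model training.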
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.