Graph Enabled Cross-Domain Knowledge Transfer
- URL: http://arxiv.org/abs/2304.03452v2
- Date: Sat, 8 Jul 2023 13:28:40 GMT
- Title: Graph Enabled Cross-Domain Knowledge Transfer
- Authors: Shibo Yao
- Abstract summary: Cross-Domain Knowledge Transfer is an approach to mitigate the gap between good representation learning and the scarce knowledge in the domain of interest.
From the machine learning perspective, the paradigm of semi-supervised learning takes advantage of large amounts of data without ground truth and achieves impressive improvements in learning performance.
- Score: 1.52292571922932
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: To leverage machine learning in any decision-making process, one must
convert the given knowledge (for example, natural language or unstructured text)
into representation vectors that machine learning models can understand and
process in a compatible language and data format. A frequently encountered
difficulty, however, is that the given knowledge is not rich or reliable enough
in the first place. In such cases, one seeks to fuse side information from a
separate domain to mitigate the gap between good representation learning and
the scarce knowledge in the domain of interest. This approach is named
Cross-Domain Knowledge Transfer. Studying the problem is crucial because scarce
knowledge is common in many scenarios, from online healthcare platform analyses
to financial market risk quantification, and it stands as an obstacle to
benefiting from automated decision making. From the machine learning
perspective, the paradigm of semi-supervised learning takes advantage of large
amounts of data without ground truth and achieves impressive improvements in
learning performance. It is adopted in this dissertation for cross-domain
knowledge transfer. (to be continued)
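The semi-supervised paradigm mentioned in the abstract can be sketched as a minimal self-training (pseudo-labeling) loop: train on the few labeled points, label the unlabeled pool, and absorb only the confident predictions. The nearest-centroid classifier, margin-based confidence, and threshold below are illustrative assumptions, not the dissertation's actual method.

```python
# Minimal self-training (pseudo-labeling) sketch for semi-supervised learning.
# The centroid classifier, confidence margin, and threshold are illustrative
# assumptions, not the method used in the dissertation.
import numpy as np

def centroids(X, y):
    """Class centroids of the labeled data."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_confidence(cents, X):
    """Label each point by its nearest centroid; confidence is the margin
    between the distances to the two closest centroids."""
    classes = sorted(cents)
    d = np.stack([np.linalg.norm(X - cents[c], axis=1) for c in classes])
    order = np.argsort(d, axis=0)                     # class indices, nearest first
    labels = np.array([classes[i] for i in order[0]])
    idx = np.arange(X.shape[0])
    conf = d[order[1], idx] - d[order[0], idx]        # second-nearest minus nearest
    return labels, conf

def self_train(X_lab, y_lab, X_unlab, threshold=1.0, rounds=5):
    """Iteratively absorb confidently pseudo-labeled points into the labeled set."""
    for _ in range(rounds):
        cents = centroids(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        labels, conf = predict_with_confidence(cents, X_unlab)
        keep = conf > threshold
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, labels[keep]])
        X_unlab = X_unlab[~keep]
    return centroids(X_lab, y_lab)
```

With only one labeled point per class and a cloud of unlabeled points, the loop refines the centroids toward the true cluster centers, which is the sense in which unlabeled data improves the learned representation.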
Related papers
- Knowledge Transfer for Cross-Domain Reinforcement Learning: A Systematic Review [2.94944680995069]
Reinforcement Learning (RL) provides a framework in which agents can be trained, via trial and error, to solve complex decision-making problems.
By reusing knowledge from a different task, knowledge transfer methods present an alternative to reduce the training time in RL.
This review presents a unifying analysis of methods focused on transferring knowledge across different domains.
arXiv Detail & Related papers (2024-04-26T20:36:58Z)
- Private Knowledge Sharing in Distributed Learning: A Survey [50.51431815732716]
The rise of Artificial Intelligence has revolutionized numerous industries and transformed the way society operates.
It is crucial to utilize information in learning processes that are either distributed or owned by different entities.
Modern data-driven services have been developed to integrate distributed knowledge entities into their outcomes.
arXiv Detail & Related papers (2024-02-08T07:18:23Z)
- Pathway toward prior knowledge-integrated machine learning in engineering [1.3091722164946331]
This study emphasizes efforts to integrate multidisciplinary domain professions into machine-acknowledgeable, data-driven processes.
This approach balances holistic and reductionist perspectives in the engineering domain.
arXiv Detail & Related papers (2023-07-10T13:06:55Z)
- Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z)
- Informed Learning by Wide Neural Networks: Convergence, Generalization and Sampling Complexity [27.84415856657607]
We study how and why domain knowledge benefits the performance of informed learning.
We propose a generalized informed training objective to better exploit the benefits of knowledge and balance the label and knowledge imperfectness.
arXiv Detail & Related papers (2022-07-02T06:28:25Z)
- Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- A Quantitative Perspective on Values of Domain Knowledge for Machine Learning [27.84415856657607]
Domain knowledge in various forms has been playing a crucial role in improving the learning performance.
We study the problem of quantifying the values of domain knowledge in terms of its contribution to the learning performance.
arXiv Detail & Related papers (2020-11-17T06:12:23Z)
- What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin in the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
arXiv Detail & Related papers (2020-08-26T17:23:40Z) - Domain Adaption for Knowledge Tracing [65.86619804954283]
We propose a novel adaptable framework, namely Adaptable Knowledge Tracing (AKT), to address the DAKT problem.
For the first aspect, we incorporate educational characteristics (e.g., slip, guess, question texts) based on deep knowledge tracing (DKT) to obtain a well-performing knowledge tracing model.
For the second aspect, we propose and adopt three domain adaptation processes. First, we pre-train an auto-encoder to select useful source instances for target model training.
arXiv Detail & Related papers (2020-01-14T15:04:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.