Improving Deep Learning Models via Constraint-Based Domain Knowledge: a
Brief Survey
- URL: http://arxiv.org/abs/2005.10691v1
- Date: Tue, 19 May 2020 15:34:09 GMT
- Title: Improving Deep Learning Models via Constraint-Based Domain Knowledge: a
Brief Survey
- Authors: Andrea Borghesi, Federico Baldo, Michela Milano
- Abstract summary: This paper presents a first survey of the approaches devised to integrate domain knowledge, expressed in the form of constraints, into Deep Learning (DL) models.
We identify five categories that encompass the main approaches to injecting domain knowledge: 1) acting on the feature space, 2) modifications to the hypothesis space, 3) data augmentation, 4) regularization schemes, 5) constrained learning.
- Score: 11.034875974800487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning (DL) models have proven to perform extremely well on a
wide variety of learning tasks, as they can learn useful patterns from large
data sets. However, purely data-driven models might struggle when very
difficult functions need to be learned or when there is not enough available
training data. Fortunately, in many domains prior information can be retrieved
and used to boost the performance of DL models. This paper presents a first
survey of the approaches devised to integrate domain knowledge, expressed in
the form of constraints, into DL models to improve their performance, in
particular targeting deep neural networks. We identify five (non-mutually
exclusive) categories that encompass the main approaches to injecting domain
knowledge: 1) acting on the feature space, 2) modifications to the hypothesis
space, 3) data augmentation, 4) regularization schemes, 5) constrained
learning.
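As a concrete illustration of category 4 (regularization schemes), the following minimal PyTorch sketch augments the training loss with a penalty for violating a domain constraint. The specific constraint (non-negative predictions), the model, the data, and the penalty weight lam are illustrative assumptions, not details taken from the survey.
```python
# Minimal sketch: constraint-based domain knowledge injected as a regularizer.
# Assumption (illustrative): domain knowledge says predictions must be >= 0.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1  # weight of the constraint-violation penalty

def training_step(x, y):
    optimizer.zero_grad()
    y_hat = model(x)
    task_loss = F.mse_loss(y_hat, y)
    # Constraint: y_hat >= 0; negative predictions are penalized quadratically.
    violation = F.relu(-y_hat)
    constraint_loss = (violation ** 2).mean()
    loss = task_loss + lam * constraint_loss
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random data whose targets are known to be non-negative.
x = torch.randn(64, 8)
y = torch.rand(64, 1)
print(training_step(x, y))
```
A constrained-learning approach (category 5) would instead enforce the constraint exactly, e.g. by projecting predictions or parameters onto the feasible set, rather than only penalizing violations.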
Related papers
- Accelerating Deep Learning with Fixed Time Budget [2.190627491782159]
This paper proposes an effective technique for training arbitrary deep learning models within fixed time constraints.
The proposed method is extensively evaluated on both classification and regression tasks in computer vision (a generic budget-aware training loop is sketched after this list).
arXiv Detail & Related papers (2024-10-03T21:18:04Z)
- A More Practical Approach to Machine Unlearning [0.0]
Machine unlearning is the ability to remove the influence of specific data points from a trained model.
The embedding layer in GPT-2 is crucial for effective unlearning.
Fuzzy matching techniques shift the model to a new optimum, while iterative unlearning provides a more complete modality.
arXiv Detail & Related papers (2024-06-13T17:59:06Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible (a gradient-projection sketch appears after this list).
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
arXiv Detail & Related papers (2023-11-29T05:33:28Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- Maximizing Model Generalization for Machine Condition Monitoring with Self-Supervised Learning and Federated Learning [4.214064911004321]
Deep Learning can diagnose faults and assess machine health from raw condition monitoring data without manually designed statistical features.
Traditional supervised learning may struggle to learn compact, discriminative representations that generalize to unseen target domains.
This study proposes maximizing feature generality on the source domain and then applying transfer learning (TL) via weight transfer to copy the model to the target domain (a minimal weight-transfer sketch appears after this list).
arXiv Detail & Related papers (2023-04-27T17:57:54Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps when training neural networks can improve model regularization and thus increase performance.
In particular, we show that our generic, domain-independent approach yields state-of-the-art results on vision, natural language processing, and time-series tasks (a generic attribution-regularization sketch appears after this list).
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- Data-Free Adversarial Knowledge Distillation for Graph Neural Networks [62.71646916191515]
We propose the first end-to-end framework for data-free adversarial knowledge distillation on graph-structured data (DFAD-GNN).
Specifically, DFAD-GNN employs a generative adversarial network with three components: a pre-trained teacher model and a student model act as two discriminators, while a generator produces training graphs used to distill knowledge from the teacher into the student.
Our DFAD-GNN significantly surpasses state-of-the-art data-free baselines on the graph classification task (a simplified data-free distillation sketch appears after this list).
arXiv Detail & Related papers (2022-05-08T08:19:40Z)
- Leveraging Intrinsic Gradient Information for Machine Learning Model Training [4.682734815593623]
Derivatives of the target variables with respect to the inputs can be leveraged to improve the accuracy of differentiable machine learning models.
Four key ideas are explored, among them: (1) improving the predictive accuracy of linear regression models and feed-forward neural networks (NNs); (2) using the difference in performance between feed-forward NNs trained with and without gradient information to tune NN complexity; and (4) using gradient information to improve generative image models (a gradient-matching training sketch appears after this list).
arXiv Detail & Related papers (2021-11-30T20:50:45Z)
- Injective Domain Knowledge in Neural Networks for Transprecision Computing [17.300144121921882]
This paper studies the improvements that can be obtained by integrating prior knowledge when dealing with a non-trivial learning task.
The results clearly show that ML models exploiting problem-specific information outperform the purely data-driven ones, with an average accuracy improvement around 38%.
arXiv Detail & Related papers (2020-02-24T12:58:56Z)
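Code sketches for selected papers
The minimal PyTorch sketches below illustrate, under stated assumptions, the general techniques behind some of the papers listed above; they are simplified illustrations, not the authors' implementations.
For "Accelerating Deep Learning with Fixed Time Budget", a generic budget-aware training loop: whole epochs are run until the next epoch would overrun a fixed wall-clock budget, and the learning rate is annealed as the budget is consumed. The budget handling and the annealing rule are assumptions for illustration.
```python
# Sketch: train until the next epoch would exceed a fixed wall-clock budget.
import time
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_with_time_budget(model, loader, loss_fn, budget_seconds=600.0, lr=1e-2):
    """Run whole epochs until the remaining budget cannot fit another epoch."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    start = time.monotonic()
    while True:
        epoch_start = time.monotonic()
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        epoch_time = time.monotonic() - epoch_start
        remaining = budget_seconds - (time.monotonic() - start)
        if remaining < epoch_time:  # not enough time left for another epoch
            break
        # Anneal the learning rate as the budget is consumed (illustrative rule).
        for group in optimizer.param_groups:
            group["lr"] = lr * max(0.05, remaining / budget_seconds)
    return model

# Toy usage: a tiny regression model trained under a 5-second budget.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
loader = [(torch.randn(32, 8), torch.randn(32, 1)) for _ in range(10)]
train_with_time_budget(model, loader, F.mse_loss, budget_seconds=5.0)
```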
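For "Learn to Unlearn for Deep Neural Networks" (PGU), a sketch of the gradient-projection idea: the forget-set gradient is projected onto the subspace orthogonal to the retain-set gradient before an ascent step, so the unlearning update interferes less with retained knowledge. The explicit retain batch and the single-vector projection are simplifying assumptions; the paper itself targets the setting where the training data are no longer accessible.
```python
# Sketch: projected-gradient unlearning step (illustration of the idea only).
import torch
import torch.nn as nn
import torch.nn.functional as F

def unlearning_step(model, x_forget, y_forget, x_retain, y_retain, lr=1e-3):
    """One ascent step on the forget data, projected to spare the retain data."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_f = torch.autograd.grad(F.cross_entropy(model(x_forget), y_forget), params)
    g_r = torch.autograd.grad(F.cross_entropy(model(x_retain), y_retain), params)
    gf = torch.cat([g.reshape(-1) for g in g_f])
    gr = torch.cat([g.reshape(-1) for g in g_r])
    # Remove the component of the forget-gradient that lies along the retain-gradient.
    gf_proj = gf - (gf @ gr) / (gr @ gr + 1e-12) * gr
    # Gradient ascent on the forget loss, restricted to the projected direction.
    offset = 0
    with torch.no_grad():
        for p in params:
            n = p.numel()
            p.add_(lr * gf_proj[offset:offset + n].view_as(p))
            offset += n

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
x_f, y_f = torch.randn(16, 10), torch.randint(0, 3, (16,))
x_r, y_r = torch.randn(64, 10), torch.randint(0, 3, (64,))
unlearning_step(model, x_f, y_f, x_r, y_r)
```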
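For "Maximizing Model Generalization for Machine Condition Monitoring", a sketch of transfer learning via weight transfer: an encoder trained on the source domain is copied into the target-domain model and a new head is fine-tuned. Layer sizes and the four fault classes are hypothetical.
```python
# Sketch: weight-transfer transfer learning with a frozen encoder.
import torch
import torch.nn as nn

def make_encoder():
    return nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))

source_encoder = make_encoder()
# ... train source_encoder on the source domain (e.g. with a self-supervised objective) ...

# Weight transfer: copy the source-domain encoder into the target-domain model.
target_encoder = make_encoder()
target_encoder.load_state_dict(source_encoder.state_dict())

# Fine-tune only a new classification head on the (small) target-domain dataset.
target_head = nn.Linear(32, 4)  # four hypothetical fault classes
for p in target_encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(target_head.parameters(), lr=1e-3)
```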
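For "CHALLENGER: Training with Attribution Maps", a generic attribution-based regularizer (not the paper's specific scheme): the loss is augmented with a penalty on input-gradient attributions, here simply their squared magnitude.
```python
# Sketch: regularize training with input-gradient attribution maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
beta = 1e-3  # weight of the attribution penalty (illustrative)

def training_step(x, y):
    optimizer.zero_grad()
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Attribution map: gradient of the predicted-class score w.r.t. the input.
    scores = logits.gather(1, logits.argmax(dim=1, keepdim=True)).sum()
    attribution, = torch.autograd.grad(scores, x, create_graph=True)
    reg = attribution.pow(2).mean()
    loss = task_loss + beta * reg
    loss.backward()
    optimizer.step()
    return loss.item()

x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(training_step(x, y))
```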
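For "Data-Free Adversarial Knowledge Distillation for Graph Neural Networks", a heavily simplified sketch of data-free adversarial distillation in which plain feature vectors stand in for the generated graphs: the generator is trained to maximize teacher-student disagreement, and the student to imitate the teacher on the generated samples.
```python
# Sketch: data-free adversarial distillation (vectors stand in for graphs).
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 16))
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

def distillation_round(batch_size=64):
    z = torch.randn(batch_size, 8)
    # Generator step: produce samples on which the student disagrees with the teacher.
    opt_g.zero_grad()
    x = generator(z)
    disagreement = F.l1_loss(student(x), teacher(x).detach())
    (-disagreement).backward()
    opt_g.step()
    # Student step: imitate the teacher on the generated samples.
    opt_s.zero_grad()
    x = generator(z).detach()
    loss_s = F.l1_loss(student(x), teacher(x).detach())
    loss_s.backward()
    opt_s.step()
    return loss_s.item()

print(distillation_round())
```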
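For "Leveraging Intrinsic Gradient Information for Machine Learning Model Training", a sketch of training with known target derivatives (a Sobolev-style loss): the objective matches both the target values and the derivatives dy/dx, here on a toy sin/cos example.
```python
# Sketch: gradient-matching loss using known derivatives of the target.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(x, y, dy_dx, alpha=1.0):
    optimizer.zero_grad()
    x = x.clone().requires_grad_(True)
    y_hat = model(x)
    # Predicted derivative of the model output w.r.t. its input.
    dyhat_dx, = torch.autograd.grad(y_hat.sum(), x, create_graph=True)
    loss = F.mse_loss(y_hat, y) + alpha * F.mse_loss(dyhat_dx, dy_dx)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy data: y = sin(x), with known derivative cos(x).
x = torch.linspace(-3.0, 3.0, 128).unsqueeze(1)
print(training_step(x, torch.sin(x), torch.cos(x)))
```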