Injective Domain Knowledge in Neural Networks for Transprecision
Computing
- URL: http://arxiv.org/abs/2002.10214v1
- Date: Mon, 24 Feb 2020 12:58:56 GMT
- Title: Injective Domain Knowledge in Neural Networks for Transprecision
Computing
- Authors: Andrea Borghesi, Federico Baldo, Michele Lombardi, Michela Milano
- Abstract summary: This paper studies the improvements that can be obtained by integrating prior knowledge when dealing with a non-trivial learning task.
The results clearly show that ML models exploiting problem-specific information outperform purely data-driven ones, with an average accuracy improvement of around 38%.
- Score: 17.300144121921882
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning (ML) models are very effective in many learning tasks, due
to their capability to extract meaningful information from large data sets.
Nevertheless, some learning problems cannot be easily solved by relying on data
alone, e.g., when data are scarce or the function to be approximated is very complex.
Fortunately, in many contexts domain knowledge is explicitly available and can
be used to train better ML models. This paper studies the improvements that can
be obtained by integrating prior knowledge when dealing with a non-trivial
learning task, namely precision tuning of transprecision computing
applications. The domain information is injected into the ML models in different
ways: (I) additional features, (II) an ad-hoc graph-based network topology, (III)
regularization schemes. The results clearly show that ML models exploiting
problem-specific information outperform purely data-driven ones, with an
average accuracy improvement of around 38%.
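As an illustration of injection route (III), below is a minimal, hypothetical PyTorch sketch of a domain-knowledge regularizer. It is not the paper's actual model: the monotonicity assumption (raising any variable's precision should not increase the predicted error), the network shape, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical error predictor for precision tuning: maps per-variable
# precisions (e.g. mantissa bits) to a predicted output error.
class ErrorPredictor(nn.Module):
    def __init__(self, n_vars: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_vars, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, precisions: torch.Tensor) -> torch.Tensor:
        return self.net(precisions)

def monotonicity_penalty(model, precisions, delta=1.0):
    # Domain-knowledge regularizer (assumed, not from the abstract):
    # raising every precision by `delta` should not increase the
    # predicted error, so penalize the positive part of the difference.
    return torch.relu(model(precisions + delta) - model(precisions)).mean()

model = ErrorPredictor(n_vars=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 8) * 52      # toy batch: mantissa bits per variable
y = torch.rand(32, 1)           # toy target errors

for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y) \
           + 0.1 * monotonicity_penalty(model, x)
    loss.backward()
    opt.step()
```

The same pattern extends to routes (I) and (II): domain features can be appended to `x`, and the topology of `self.net` can be shaped to mirror the structure of the target program.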
Related papers
- What Do Learning Dynamics Reveal About Generalization in LLM Reasoning? [83.83230167222852]
We find that a model's generalization behavior can be effectively characterized by a training metric we call pre-memorization train accuracy.
By connecting a model's learning behavior to its generalization, pre-memorization train accuracy can guide targeted improvements to training strategies.
arXiv Detail & Related papers (2024-11-12T09:52:40Z)
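As a rough illustration of the metric (a toy classification analogue, not the paper's exact definition for LLM reasoning), one plausible reading is: for each training example, record whether the model was already correct at the checkpoint just before its training loss collapses, i.e. before the example is "memorized". The loss threshold and checkpointing scheme below are assumptions.

```python
import torch
import torch.nn.functional as F

def pre_memorization_accuracy(checkpoint_logits, labels, loss_threshold=0.05):
    """checkpoint_logits: list over checkpoints of [n, n_classes] tensors."""
    n = labels.shape[0]
    correct_before = torch.zeros(n, dtype=torch.bool)
    memorized = torch.zeros(n, dtype=torch.bool)
    prev_correct = torch.zeros(n, dtype=torch.bool)
    for logits in checkpoint_logits:
        losses = F.cross_entropy(logits, labels, reduction="none")
        newly = (losses < loss_threshold) & ~memorized  # crude "memorization" proxy
        correct_before[newly] = prev_correct[newly]     # accuracy just before memorization
        memorized |= newly
        prev_correct = logits.argmax(dim=1) == labels
    # NaN if nothing was memorized over the recorded checkpoints
    return correct_before[memorized].float().mean()
```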
- How to unlearn a learned Machine Learning model? [0.0]
I will present an elegant algorithm for unlearning a machine learning model and visualize its abilities.
I will elucidate the underlying mathematical theory and establish specific metrics to evaluate both the unlearned model's performance on desired data and its level of ignorance regarding unwanted data.
arXiv Detail & Related papers (2024-10-13T17:38:09Z)
- M$^3$-Impute: Mask-guided Representation Learning for Missing Value Imputation [12.174699459648842]
M$^3$-Impute aims to explicitly leverage the missingness information and the associated correlations via novel masking schemes.
Experiment results show the effectiveness of M$^3$-Impute, which achieves 20 best and 4 second-best average MAE scores.
arXiv Detail & Related papers (2024-10-11T13:25:32Z)
- Generative Adversarial Networks for Imputing Sparse Learning Performance [3.0350058108125646]
This paper proposes using the Generative Adversarial Imputation Networks (GAIN) framework to impute sparse learning performance data.
Our customized GAIN-based method imputes sparse data in a 3D tensor space.
This finding enhances comprehensive learning data modeling and analytics in AI-based education.
arXiv Detail & Related papers (2024-07-26T17:09:48Z)
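For context, the sketch below shows a minimal GAIN-style training step in the spirit of Yoon et al.'s framework, heavily simplified: flat features instead of the paper's 3D tensors, a simplified hint mechanism, and illustrative loss weights.

```python
import torch
import torch.nn as nn

d = 10                                   # number of features (illustrative)
G = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, d), nn.Sigmoid())
D = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, d), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

x = torch.rand(128, d)                   # toy data scaled to [0, 1]
m = (torch.rand(128, d) > 0.3).float()   # mask: 1 = observed, 0 = missing

for _ in range(200):
    noise = torch.rand_like(x)
    x_in = m * x + (1 - m) * noise               # fill missing entries with noise
    x_gen = G(torch.cat([x_in, m], dim=1))
    x_hat = m * x + (1 - m) * x_gen              # keep observed values as-is
    hint = m * (torch.rand_like(m) < 0.9).float()  # reveal most of the mask to D

    # Discriminator: guess which entries were actually observed
    d_prob = D(torch.cat([x_hat.detach(), hint], dim=1))
    loss_d = nn.functional.binary_cross_entropy(d_prob, m)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D on imputed entries + reconstruct observed ones
    d_prob = D(torch.cat([x_hat, hint], dim=1))
    loss_g = -((1 - m) * torch.log(d_prob + 1e-8)).mean() \
             + 10.0 * ((m * (x - x_gen)) ** 2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```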
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and makes it more stable and accurate; a minimal behavior-cloning sketch follows this entry.
Our model's network parameters are reduced to only 37% of the baselines', and the average gap between our model's solutions and the expert solutions decreases from 6.8% to 1.3%.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
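The behavior-cloning core of expert-driven imitation learning can be sketched as follows; the flat policy network and toy data are assumptions standing in for the paper's graph-based PFSS model.

```python
import torch
import torch.nn as nn

# Toy policy: state features -> scores over 10 candidate jobs (illustrative).
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

states = torch.randn(256, 16)                   # toy scheduling states
expert_actions = torch.randint(0, 10, (256,))   # expert's chosen job indices

for _ in range(100):
    opt.zero_grad()
    # Imitation via cross-entropy against the expert's decisions
    loss = nn.functional.cross_entropy(policy(states), expert_actions)
    loss.backward()
    opt.step()
```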
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
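A very rough sketch of the central idea, jointly learning a feature map and inducing points that live directly in that feature space, is given below. It reduces the model to an RBF-network-style regressor and omits IGN's actual Gaussian-process machinery, so every component here is an illustrative assumption.

```python
import torch
import torch.nn as nn

class IGNSketch(nn.Module):
    def __init__(self, d_in=8, d_feat=16, n_inducing=20):
        super().__init__()
        # Learned feature map
        self.phi = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, d_feat))
        # Inducing points parameterized directly in the feature space
        self.Z = nn.Parameter(torch.randn(n_inducing, d_feat))
        self.w = nn.Parameter(torch.zeros(n_inducing))
        self.log_ls = nn.Parameter(torch.zeros(()))   # log kernel length-scale

    def forward(self, x):
        f = self.phi(x)                               # [B, d_feat]
        d2 = torch.cdist(f, self.Z).pow(2)            # sq. distances to inducing points
        k = torch.exp(-0.5 * d2 / self.log_ls.exp())  # RBF kernel features [B, n_inducing]
        return k @ self.w

model = IGNSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = torch.randn(128, 8), torch.randn(128)          # toy regression data
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```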
- Leveraging Intrinsic Gradient Information for Machine Learning Model Training [4.682734815593623]
Derivatives of target variables with respect to inputs can be leveraged to improve the accuracy of differentiable machine learning models.
Four key ideas are explored, among them: (1) improving the predictive accuracy of linear regression models and feed-forward neural networks (NNs); (2) using the difference between the performance of feed-forward NNs trained with and without gradient information to tune NN complexity; and (4) using gradient information to improve generative image models.
arXiv Detail & Related papers (2021-11-30T20:50:45Z)
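Idea (1) can be illustrated with a Sobolev-style training loop that penalizes the mismatch between the model's input gradients and known target derivatives; the toy task (fitting sin with known derivative cos), the network, and the loss weight are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0, 2 * torch.pi, 128).unsqueeze(1)
y, dy = torch.sin(x), torch.cos(x)   # target values and known target derivatives

for _ in range(500):
    opt.zero_grad()
    x_req = x.requires_grad_(True)
    pred = net(x_req)
    # d(pred)/dx via autograd; create_graph=True makes the penalty trainable
    grad_pred = torch.autograd.grad(pred.sum(), x_req, create_graph=True)[0]
    loss = nn.functional.mse_loss(pred, y) \
           + 0.5 * nn.functional.mse_loss(grad_pred, dy)
    loss.backward()
    opt.step()
```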
- Complementary Ensemble Learning [1.90365714903665]
We derive a technique to improve the performance of state-of-the-art deep learning models.
Specifically, we train auxiliary models that complement a state-of-the-art model on the inputs where it is uncertain.
arXiv Detail & Related papers (2021-11-09T03:23:05Z)
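One plausible (assumed) realization of the idea is to defer to an auxiliary model on inputs where the main model's predictive entropy is high; the models and threshold below are illustrative.

```python
import torch
import torch.nn as nn

main = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 5))
aux = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 5))

def predict(x, entropy_threshold=1.0):
    p_main = main(x).softmax(-1)
    # Predictive entropy of the main model as an uncertainty signal
    entropy = -(p_main * p_main.clamp_min(1e-8).log()).sum(-1)
    p_aux = aux(x).softmax(-1)
    uncertain = (entropy > entropy_threshold).unsqueeze(-1)
    return torch.where(uncertain, p_aux, p_main)  # auxiliary fills in when main is unsure

probs = predict(torch.randn(16, 8))
```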
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
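A toy sketch of the estimation idea: treat the BNN's predictive distribution (here crudely approximated with MC dropout) as a proxy for the unknown labels and compute the expected accuracy of the model-under-test. The networks, data pool, and sample count are illustrative assumptions.

```python
import torch
import torch.nn as nn

model_under_test = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 3))
bnn = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Dropout(0.2), nn.Linear(32, 3))

x_pool = torch.randn(500, 8)             # unlabelled test pool

bnn.train()                              # keep dropout active for MC sampling
with torch.no_grad():
    probs = torch.stack([bnn(x_pool).softmax(-1) for _ in range(30)]).mean(0)
    preds = model_under_test(x_pool).argmax(-1)
    # Expected accuracy under the BNN's predictive label distribution
    acc_estimate = probs[torch.arange(len(x_pool)), preds].mean()
print(f"estimated accuracy: {acc_estimate:.3f}")
```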
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale machine learning aims to learn patterns from big data efficiently, with comparable performance.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
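The zeroth-order ingredient behind BAR can be sketched as a random-direction finite-difference gradient estimate: the black box is only queried for outputs, never differentiated. The toy black box, loss, and hyperparameters below are assumptions, not BAR's actual setup.

```python
import torch

def black_box(x):
    # Stand-in for an opaque model queried only through its outputs
    return (x * 1.7 - 0.3).sum(dim=-1)

def loss(theta, x):
    # Toy objective: drive the black box's output toward 1.0 by
    # learning an additive input perturbation theta (the "reprogram")
    return (black_box(x + theta) - 1.0) ** 2

theta = torch.zeros(8)
x = torch.randn(8)
mu, q, lr = 1e-2, 20, 1e-2              # smoothing, #directions, step size

for _ in range(200):
    base = loss(theta, x)
    grad_est = torch.zeros_like(theta)
    for _ in range(q):
        u = torch.randn_like(theta)     # random search direction
        grad_est += (loss(theta + mu * u, x) - base) / mu * u
    theta -= lr * grad_est / q          # averaged one-sided estimator
```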
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.