Hyperparameter optimization in deep multi-target prediction
- URL: http://arxiv.org/abs/2211.04362v1
- Date: Tue, 8 Nov 2022 16:33:36 GMT
- Title: Hyperparameter optimization in deep multi-target prediction
- Authors: Dimitrios Iliadis, Marcel Wever, Bernard De Baets, Willem Waegeman
- Abstract summary: We offer a single AutoML framework for most problem settings that fall under the umbrella of multi-target prediction.
- Score: 16.778802088570412
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As a result of the ever-increasing complexity of configuring and fine-tuning
machine learning models, the field of automated machine learning (AutoML) has
emerged over the past decade. However, software implementations like Auto-WEKA
and Auto-sklearn typically focus on classical machine learning (ML) tasks such
as classification and regression. Our work can be seen as the first attempt at
offering a single AutoML framework for most problem settings that fall under
the umbrella of multi-target prediction, which includes popular ML settings
such as multi-label classification, multivariate regression, multi-task
learning, dyadic prediction, matrix completion, and zero-shot learning.
Automated problem selection and model configuration are achieved by extending
DeepMTP, a general deep learning framework for MTP problem settings, with
popular hyperparameter optimization (HPO) methods. Our extensive benchmarking
across different datasets and MTP problem settings identifies cases where
specific HPO methods outperform others.
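The paper's contribution is to couple DeepMTP with standard HPO methods. As a minimal illustration of what such an HPO loop involves, the sketch below implements plain random search over a small, hypothetical search space; the hyperparameter names and the toy objective are assumptions for the demo, not DeepMTP's actual API.

```python
import random

# Hypothetical search space; a real deep MTP model would expose more knobs.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "hidden_units": [32, 64, 128],
    "dropout": [0.0, 0.25, 0.5],
}

def sample_config(space, rng):
    """Draw one configuration uniformly at random from the grid."""
    return {name: rng.choice(values) for name, values in space.items()}

def validation_loss(config):
    """Toy stand-in objective; replace with real training + validation."""
    return (config["learning_rate"] - 1e-3) ** 2 + config["dropout"] * 0.1

def random_search(space, n_trials=20, seed=0):
    """Return the best configuration found over n_trials random samples."""
    rng = random.Random(seed)
    best_config, best_loss = None, float("inf")
    for _ in range(n_trials):
        config = sample_config(space, rng)
        loss = validation_loss(config)
        if loss < best_loss:
            best_config, best_loss = config, loss
    return best_config, best_loss

best, loss = random_search(SEARCH_SPACE)
```

Random search is only the simplest baseline among the HPO methods the paper benchmarks; model-based methods replace the uniform sampling step with a surrogate-guided proposal.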
Related papers
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm, enabling us to have better control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
arXiv Detail & Related papers (2024-10-29T14:41:44Z)
- AutoML-GPT: Large Language Model for AutoML [5.9145212342776805]
We have established a framework called AutoML-GPT that integrates a comprehensive set of tools and libraries.
Through a conversational interface, users can specify their requirements, constraints, and evaluation metrics.
We have demonstrated that AutoML-GPT significantly reduces the time and effort required for machine learning tasks.
arXiv Detail & Related papers (2023-09-03T09:39:49Z)
- The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation [93.01964988474755]
AutoMQM is a prompting technique which asks large language models to identify and categorize errors in translations.
We study the impact of labeled data through in-context learning and finetuning.
We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores.
arXiv Detail & Related papers (2023-08-14T17:17:21Z)
- AutoML-GPT: Automatic Machine Learning with GPT [74.30699827690596]
We propose developing task-oriented prompts and automatically utilizing large language models (LLMs) to automate the training pipeline.
We present AutoML-GPT, which employs GPT as the bridge to diverse AI models and dynamically trains models with optimized hyperparameters.
This approach achieves remarkable results in computer vision, natural language processing, and other challenging areas.
arXiv Detail & Related papers (2023-05-04T02:09:43Z)
- Model-Agnostic Multitask Fine-tuning for Few-shot Vision-Language Transfer Learning [59.38343286807997]
We propose Model-Agnostic Multitask Fine-tuning (MAMF) for vision-language models on unseen tasks.
Compared with model-agnostic meta-learning (MAML), MAMF discards the bi-level optimization and uses only first-order gradients.
We show that MAMF consistently outperforms the classical fine-tuning method for few-shot transfer learning on five benchmark datasets.
arXiv Detail & Related papers (2022-03-09T17:26:53Z)
- Mining Robust Default Configurations for Resource-constrained AutoML [18.326426020906215]
We present a novel method of selecting performant configurations for a given task by performing offline AutoML and mining over a diverse set of tasks.
We show that our approach is effective for warm-starting existing AutoML platforms.
arXiv Detail & Related papers (2022-02-20T23:08:04Z)
- Automated problem setting selection in multi-target prediction with AutoMTP [14.451046691298298]
AutoMTP is an automated framework that performs algorithm selection for Multi-Target Prediction.
It is realized by adopting a rule-based system for the algorithm selection step and a flexible neural network architecture.
arXiv Detail & Related papers (2021-04-19T12:44:20Z)
- Robusta: Robust AutoML for Feature Selection via Reinforcement Learning [24.24652530951966]
We propose Robusta, the first robust AutoML framework, which is based on reinforcement learning (RL).
We show that the framework is able to improve the model robustness by up to 22% while maintaining competitive accuracy on benign samples.
arXiv Detail & Related papers (2021-01-15T03:12:29Z)
- Resource-Aware Pareto-Optimal Automated Machine Learning Platform [1.6746303554275583]
We present the novel platform Resource-Aware AutoML (RA-AutoML).
RA-AutoML enables flexible and generalized algorithms to build machine learning models subject to multiple objectives.
arXiv Detail & Related papers (2020-10-30T19:37:48Z)
- Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL [53.40030379661183]
Auto-PyTorch is a framework to enable fully automated deep learning (AutoDL).
It combines multi-fidelity optimization with portfolio construction for warmstarting and ensembling of deep neural networks (DNNs).
We show that Auto-PyTorch performs better than several state-of-the-art competitors on average.
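Auto-PyTorch's multi-fidelity optimization builds on budget-based elimination schemes such as successive halving: many configurations are evaluated cheaply, and only the best-performing fraction is promoted to larger budgets. A minimal, self-contained sketch of that idea (with a hypothetical toy objective, not Auto-PyTorch's actual interface) could look like:

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """Repeatedly evaluate surviving configs at growing budgets,
    keeping the best 1/eta fraction each round (lower loss is better)."""
    budget = min_budget
    survivors = list(configs)
    while len(survivors) > 1:
        scored = sorted(survivors, key=lambda c: evaluate(c, budget))
        survivors = scored[: max(1, len(scored) // eta)]
        budget *= eta  # promoted configs get a larger budget next round
    return survivors[0]

def toy_loss(config, budget):
    """Hypothetical objective; a real evaluation would train for `budget` epochs."""
    return abs(config - 0.3) / budget

best = successive_halving([0.1, 0.25, 0.3, 0.7, 0.9], toy_loss)
```

The design trade-off is that configurations that only shine at high budgets can be eliminated early; Hyperband-style schemes mitigate this by running several such brackets with different starting budgets.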
arXiv Detail & Related papers (2020-06-24T15:15:17Z)
- AutoFIS: Automatic Feature Interaction Selection in Factorization Models for Click-Through Rate Prediction [75.16836697734995]
We propose a two-stage algorithm called Automatic Feature Interaction Selection (AutoFIS).
AutoFIS can automatically identify important feature interactions for factorization models with computational cost just equivalent to training the target model to convergence.
AutoFIS has been deployed onto the training platform of Huawei App Store recommendation service.
arXiv Detail & Related papers (2020-03-25T06:53:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.