Tuning structure learning algorithms with out-of-sample and resampling
strategies
- URL: http://arxiv.org/abs/2306.13932v1
- Date: Sat, 24 Jun 2023 10:39:44 GMT
- Title: Tuning structure learning algorithms with out-of-sample and resampling
strategies
- Authors: Kiattikun Chobtham, Anthony C. Constantinou
- Abstract summary: The Out-of-sample Tuning for Structure Learning (OTSL) method employs out-of-sample and resampling strategies to estimate the optimal hyperparameter configuration for a given structure learning algorithm and input data set.
We show that employing OTSL leads to improvements in graphical accuracy compared to the state-of-the-art.
- Score: 6.85316573653194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the challenges practitioners face when applying structure learning
algorithms to their data involves determining a set of hyperparameters;
otherwise, a set of hyperparameter defaults is assumed. The optimal
hyperparameter configuration often depends on multiple factors, including the
size and density of the usually unknown underlying true graph, the sample size
of the input data, and the structure learning algorithm. We propose a novel
hyperparameter tuning method, called the Out-of-sample Tuning for Structure
Learning (OTSL), that employs out-of-sample and resampling strategies to
estimate the optimal hyperparameter configuration for structure learning, given
the input data set and structure learning algorithm. Synthetic experiments show
that employing OTSL as a means to tune the hyperparameters of hybrid and
score-based structure learning algorithms leads to improvements in graphical
accuracy compared to the state-of-the-art. We also illustrate the applicability
of this approach to real datasets from different disciplines.
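The summary above does not include an implementation; the sketch below is only a minimal illustration of the general idea behind out-of-sample, resampling-based hyperparameter tuning, not the authors' OTSL procedure. The correlation-threshold "learner", the linear-Gaussian held-out log-likelihood score, and all function and parameter names are assumptions introduced purely for illustration.

```python
import numpy as np

# Hypothetical stand-in for a structure learning algorithm whose behaviour is
# controlled by a single hyperparameter (here, a correlation threshold).
# The tuning loop is agnostic to the learner; a hybrid or score-based
# algorithm and its own hyperparameters would be plugged in instead.
def learn_structure(data, threshold):
    corr = np.corrcoef(data, rowvar=False)
    n = corr.shape[0]
    adj = (np.abs(corr) > threshold) & ~np.eye(n, dtype=bool)
    return np.triu(adj)  # keep an acyclic (upper-triangular) orientation

# Illustrative out-of-sample score: fit a linear-Gaussian model for each node
# given its parents on the training split, then return the average held-out
# log-likelihood of the test split under that fitted model.
def out_of_sample_score(adj, train, test):
    total = 0.0
    for i in range(adj.shape[0]):
        parents = np.flatnonzero(adj[:, i])
        X_tr = np.column_stack([train[:, parents], np.ones(len(train))])
        X_te = np.column_stack([test[:, parents], np.ones(len(test))])
        beta, *_ = np.linalg.lstsq(X_tr, train[:, i], rcond=None)
        sigma2 = max((train[:, i] - X_tr @ beta).var(), 1e-12)
        resid = test[:, i] - X_te @ beta
        total += np.mean(-0.5 * (np.log(2 * np.pi * sigma2) + resid ** 2 / sigma2))
    return total

# Resampling loop: for each candidate hyperparameter, average the out-of-sample
# score over several random train/test splits and keep the best configuration.
def tune(data, candidates, n_resamples=10, test_frac=0.3, seed=0):
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for h in candidates:
        scores = []
        for _ in range(n_resamples):
            idx = rng.permutation(len(data))
            cut = int(len(data) * (1 - test_frac))
            train, test = data[idx[:cut]], data[idx[cut:]]
            scores.append(out_of_sample_score(learn_structure(train, h), train, test))
        if np.mean(scores) > best_score:
            best, best_score = h, float(np.mean(scores))
    return best, best_score

if __name__ == "__main__":
    # Toy data from a known chain x0 -> x1 -> x2, just to exercise the tuner.
    rng = np.random.default_rng(1)
    x0 = rng.normal(size=2000)
    x1 = 0.8 * x0 + rng.normal(size=2000)
    x2 = 0.5 * x1 + rng.normal(size=2000)
    print(tune(np.column_stack([x0, x1, x2]), candidates=[0.1, 0.3, 0.5, 0.7, 0.9]))
```

In practice, the placeholder learner would be replaced by the hybrid or score-based structure learning algorithm being tuned, and the simple hold-out split could be swapped for k-fold or bootstrap resampling depending on the sample size.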
Related papers
- Benchmark on Drug Target Interaction Modeling from a Structure Perspective [48.60648369785105]
Drug-target interaction prediction is crucial to drug discovery and design.
Recent methods, such as those based on graph neural networks (GNNs) and Transformers, demonstrate exceptional performance across various datasets.
We conduct a comprehensive survey and benchmark for drug-target interaction modeling from a structure perspective, via integrating tens of explicit (i.e., GNN-based) and implicit (i.e., Transformer-based) structure learning algorithms.
arXiv Detail & Related papers (2024-07-04T16:56:59Z) - A Structural-Clustering Based Active Learning for Graph Neural Networks [16.85038790429607]
We propose the Structural-Clustering PageRank method for improved Active learning (SPA) specifically designed for graph-structured data.
SPA integrates community detection using the SCAN algorithm with the PageRank scoring method for efficient and informative sample selection.
arXiv Detail & Related papers (2023-12-07T14:04:38Z) - Robustness of Algorithms for Causal Structure Learning to Hyperparameter
Choice [2.3020018305241337]
Hyperparameter tuning can make the difference between state-of-the-art and poor prediction performance for any algorithm.
We investigate the influence of hyperparameter selection on causal structure learning tasks.
arXiv Detail & Related papers (2023-10-27T15:34:08Z) - AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient
Hyper-parameter Tuning [72.54359545547904]
We propose a gradient-based subset selection framework for hyperparameter tuning.
We show that using gradient-based data subsets for hyperparameter tuning achieves significantly faster turnaround times and speedups of 3$\times$-30$\times$.
arXiv Detail & Related papers (2022-03-15T19:25:01Z) - Automatic tuning of hyper-parameters of reinforcement learning
algorithms using Bayesian optimization with behavioral cloning [0.0]
In reinforcement learning (RL), the information content of data gathered by the learning agent is dependent on the setting of many hyperparameters.
In this work, a novel approach for autonomous hyperparameter setting using Bayesian optimization is proposed.
Experiments reveal promising results compared to other manual tweaking and optimization-based approaches.
arXiv Detail & Related papers (2021-12-15T13:10:44Z) - Experimental Investigation and Evaluation of Model-based Hyperparameter
Optimization [0.3058685580689604]
This article presents an overview of theoretical and practical results for popular machine learning algorithms.
The R package mlr is used as a uniform interface to the machine learning models.
arXiv Detail & Related papers (2021-07-19T11:37:37Z) - Online hyperparameter optimization by real-time recurrent learning [57.01871583756586]
Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs).
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance compared to standard methods, at a fraction of the wallclock time.
arXiv Detail & Related papers (2021-02-15T19:36:18Z) - An AI-Assisted Design Method for Topology Optimization Without
Pre-Optimized Training Data [68.8204255655161]
An AI-assisted design method based on topology optimization is presented, which is able to obtain optimized designs in a direct way.
Designs are provided by an artificial neural network, the predictor, on the basis of boundary conditions and degree of filling as input data.
arXiv Detail & Related papers (2020-12-11T14:33:27Z) - TFPnP: Tuning-free Plug-and-Play Proximal Algorithm with Applications to
Inverse Imaging Problems [22.239477171296056]
Plug-and-Play (PnP) is a non-convex optimization framework that combines proximal algorithms with advanced denoising priors.
We discuss several practical considerations of different denoisers, which together with our learned strategies lead to state-of-the-art results.
arXiv Detail & Related papers (2020-11-18T14:19:30Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
arXiv Detail & Related papers (2020-06-11T16:36:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.