Optimization of Actuarial Neural Networks with Response Surface Methodology
- URL: http://arxiv.org/abs/2410.12824v1
- Date: Tue, 01 Oct 2024 15:45:41 GMT
- Title: Optimization of Actuarial Neural Networks with Response Surface Methodology
- Authors: Belguutei Ariuntugs, Kehelwala Dewage Gayan Madurang,
- Abstract summary: This study utilizes a factorial design and response surface methodology (RSM) to optimize CANN performance.
By dropping statistically insignificant hyperparameters, we reduced runs from 288 to 188, with negligible loss in accuracy, achieving near-optimal out-of-sample Poisson deviance loss.
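The evaluation metric named above, out-of-sample Poisson deviance, follows a standard formula. A minimal NumPy sketch (variable names are illustrative, not taken from the paper):

```python
import numpy as np

def poisson_deviance(y, mu):
    """Mean Poisson deviance between observed counts y and predicted means mu.

    D = (2/n) * sum( y*log(y/mu) - (y - mu) ),
    where the y*log(y/mu) term is taken as 0 when y == 0.
    """
    y = np.asarray(y, dtype=float)
    mu = np.asarray(mu, dtype=float)
    term = np.zeros_like(y)
    mask = y > 0  # avoid log(0); the limit of x*log(x) as x -> 0 is 0
    term[mask] = y[mask] * np.log(y[mask] / mu[mask])
    return 2.0 * np.mean(term - (y - mu))

# A perfect prediction gives zero deviance:
print(poisson_deviance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 0.0
```

Lower deviance is better; a model is compared against others via this loss on held-out data.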
- Abstract: In the data-driven world of actuarial science, machine learning (ML) plays a crucial role in predictive modeling, enhancing risk assessment and pricing strategies. Neural networks, specifically combined actuarial neural networks (CANN), are vital for tasks such as mortality forecasting and pricing. However, optimizing hyperparameters (e.g., learning rates, layers) is essential for resource efficiency. This study utilizes a factorial design and response surface methodology (RSM) to optimize CANN performance. RSM effectively explores the hyperparameter space and captures potential curvature, outperforming traditional grid search. Our results show accurate performance predictions, identifying critical hyperparameters. By dropping statistically insignificant hyperparameters, we reduced runs from 288 to 188, with negligible loss in accuracy, achieving near-optimal out-of-sample Poisson deviance loss.
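The RSM idea in the abstract can be sketched as follows: fit a second-order (quadratic) response surface to observed (hyperparameter, loss) pairs from a factorial design, then solve for the surface's stationary point as a candidate optimum. The data, the two-factor restriction, and all values below are invented for illustration and are not the paper's actual design:

```python
import numpy as np

# Toy observations: (learning_rate, n_layers) -> out-of-sample loss.
# In practice RSM usually codes each factor to [-1, 1] before fitting.
X = np.array([[0.001, 2], [0.001, 4], [0.01, 2], [0.01, 4], [0.005, 3],
              [0.003, 3], [0.007, 2], [0.002, 4], [0.008, 4]])
y = np.array([0.92, 0.88, 0.95, 0.97, 0.85, 0.86, 0.91, 0.87, 0.96])

# Second-order response surface:
# loss ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
x1, x2 = X[:, 0], X[:, 1]
D = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)

# Stationary point: set the gradient to zero and solve
# [2*b3, b5; b5, 2*b4] @ [x1*, x2*] = -[b1, b2]
H = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
g = -np.array([beta[1], beta[2]])
x_star = np.linalg.solve(H, g)
print("fitted coefficients:", beta)
print("stationary point (candidate optimum):", x_star)
```

The fitted quadratic terms are also what reveal the "potential curvature" the abstract mentions; insignificant coefficients flag hyperparameters that can be dropped from the design.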
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Efficient Hyperparameter Importance Assessment for CNNs [1.7778609937758323]
This paper aims to quantify the importance weights of some hyperparameters in Convolutional Neural Networks (CNNs) with an algorithm called N-RReliefF.
We conduct an extensive study by training over ten thousand CNN models across ten popular image classification datasets.
arXiv Detail & Related papers (2024-10-11T15:47:46Z) - Advancing Financial Risk Prediction Through Optimized LSTM Model Performance and Comparative Analysis [12.575399233846092]
This paper focuses on the application and optimization of LSTM model in financial risk prediction.
The optimized LSTM model shows significant advantages in AUC compared with random forest, BP neural network, and XGBoost.
arXiv Detail & Related papers (2024-05-31T03:31:17Z) - Robust Neural Pruning with Gradient Sampling Optimization for Residual Neural Networks [0.0]
This research pioneers the integration of gradient sampling optimization techniques, particularly StochGradAdam, into the neural network pruning process.
Our main objective is to address the significant challenge of maintaining accuracy in pruned neural models, critical in resource-constrained scenarios.
arXiv Detail & Related papers (2023-12-26T12:19:22Z) - Fast Exploration of the Impact of Precision Reduction on Spiking Neural Networks [63.614519238823206]
Spiking Neural Networks (SNNs) are a practical choice when the target hardware reaches the edge of computing.
We employ an Interval Arithmetic (IA) model to develop an exploration methodology that takes advantage of the capability of such a model to propagate the approximation error.
arXiv Detail & Related papers (2022-11-22T15:08:05Z) - FasterPose: A Faster Simple Baseline for Human Pose Estimation [65.8413964785972]
We propose a design paradigm for a cost-effective network with low-resolution (LR) representation for efficient pose estimation, named FasterPose.
We study the training behavior of FasterPose, and formulate a novel regressive cross-entropy (RCE) loss function for accelerating the convergence.
Compared with the previously dominant pose-estimation network, our method reduces FLOPs by 58% while improving accuracy by 1.3%.
arXiv Detail & Related papers (2021-07-07T13:39:08Z) - Genetically Optimized Prediction of Remaining Useful Life [4.115847582689283]
We implement LSTM and GRU models and compare the obtained results with a proposed genetically trained neural network.
We hope to improve the consistency of the predictions by adding another layer of optimization using Genetic Algorithms.
These models and the proposed architecture are tested on the NASA Turbofan Jet Engine dataset.
arXiv Detail & Related papers (2021-02-17T16:09:23Z) - A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
arXiv Detail & Related papers (2020-12-25T20:50:15Z) - Optimization-driven Machine Learning for Intelligent Reflecting Surfaces Assisted Wireless Networks [82.33619654835348]
Intelligent reflecting surface (IRS) has been employed to reshape the wireless channels by controlling individual scattering elements' phase shifts.
Due to the large size of scattering elements, the passive beamforming is typically challenged by the high computational complexity.
In this article, we focus on machine learning (ML) approaches for performance optimization in IRS-assisted wireless networks.
arXiv Detail & Related papers (2020-08-29T08:39:43Z) - An Asymptotically Optimal Multi-Armed Bandit Algorithm and Hyperparameter Optimization [48.5614138038673]
We propose an efficient and robust bandit-based algorithm called Sub-Sampling (SS) in the scenario of hyperparameter search evaluation.
We also develop a novel hyperparameter optimization algorithm called BOSS.
Empirical studies validate our theoretical arguments of SS and demonstrate the superior performance of BOSS on a number of applications.
arXiv Detail & Related papers (2020-07-11T03:15:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.