Complexity Measures for Multi-objective Symbolic Regression
- URL: http://arxiv.org/abs/2109.00238v1
- Date: Wed, 1 Sep 2021 08:22:41 GMT
- Title: Complexity Measures for Multi-objective Symbolic Regression
- Authors: Michael Kommenda, Andreas Beham, Michael Affenzeller, Gabriel Kronberger
- Abstract summary: Multi-objective symbolic regression has the advantage that while the accuracy of the learned models is maximized, the complexity is automatically adapted.
We study which complexity measures are most appropriate for symbolic regression when performing multi-objective optimization with NSGA-II.
- Score: 2.4087148947930634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-objective symbolic regression has the advantage that while the accuracy
of the learned models is maximized, the complexity is automatically adapted and
need not be specified a priori. The result of the optimization is no longer a single
solution, but a whole Pareto front describing the trade-off between
accuracy and complexity. In this contribution we study which complexity
measures are most appropriate for symbolic regression when performing
multi-objective optimization with NSGA-II. Furthermore, we present a novel
complexity measure that includes semantic information based on the function
symbols occurring in the models and test its effects on several benchmark
datasets. Results comparing multiple complexity measures are presented in terms
of the achieved accuracy and model length to illustrate how the search
direction of the algorithm is affected.
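To make the setup concrete, below is a minimal sketch of a recursive, symbol-aware complexity measure over an expression tree, together with the two-objective fitness NSGA-II would minimize. The `Node` type, the symbol weights, and the combination rules are illustrative assumptions for this sketch, not the paper's exact definitions.

```python
import math
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """An expression-tree node: a function symbol or a terminal."""
    symbol: str                        # e.g. "+", "*", "sin", "x1", "const"
    children: List["Node"] = field(default_factory=list)


def complexity(node: Node) -> float:
    """Recursive, symbol-aware complexity (illustrative rules only):
    additive symbols aggregate their children, multiplicative symbols
    amplify them, and transcendental symbols are penalized exponentially."""
    if node.symbol == "const":
        return 1.0
    if node.symbol.startswith("x"):              # input variable
        return 2.0
    child = [complexity(c) for c in node.children]
    if node.symbol in ("+", "-"):
        return sum(child)
    if node.symbol in ("*", "/"):
        return math.prod(child) + 1.0
    if node.symbol in ("sin", "cos", "exp", "log"):
        return 2.0 ** child[0]
    raise ValueError(f"unknown symbol: {node.symbol}")


def objectives(model: Node, mse: float) -> tuple:
    """Two objectives for NSGA-II to minimize jointly; swapping in a
    different complexity measure changes the search direction along
    the Pareto front."""
    return (mse, complexity(model))


# Example: sin(x1 * x2) + c
tree = Node("+", [Node("sin", [Node("*", [Node("x1"), Node("x2")])]),
                  Node("const")])
print(objectives(tree, mse=0.05))    # (0.05, 33.0)
```

Replacing `complexity` with a plain node count yields a purely syntactic measure; the paper reports its results in terms of achieved accuracy and model length.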
Related papers
- GLCM-Based Feature Combination for Extraction Model Optimization in Object Detection Using Machine Learning [0.0]
This research aims to enhance computational efficiency by selecting appropriate features within the GLCM framework.
Two classification models, namely K-Nearest Neighbours (K-NN) and Support Vector Machine (SVM), were employed.
The results indicate that K-NN outperforms SVM in terms of computational complexity.
arXiv Detail & Related papers (2024-04-06T10:16:33Z)
- RGM: A Robust Generalizable Matching Model [49.60975442871967]
We propose a deep model for sparse and dense matching, termed RGM (Robust Generalist Matching).
To narrow the gap between synthetic training samples and real-world scenarios, we build a new, large-scale dataset with sparse correspondence ground truth.
We are able to mix up various dense and sparse matching datasets, significantly improving the training diversity.
arXiv Detail & Related papers (2023-10-18T07:30:08Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Reinforcement Learning for Adaptive Mesh Refinement [63.7867809197671]
We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning to train refinement policies directly from simulation.
The model sizes of these policy architectures are independent of the mesh size and hence scale to arbitrarily large and complex simulations.
arXiv Detail & Related papers (2021-03-01T22:55:48Z)
- Surrogate Models for Optimization of Dynamical Systems [0.0]
This paper provides a smart, data-driven mechanism to construct low-dimensional surrogate models.
These surrogate models reduce the computational time for solution of the complex optimization problems.
arXiv Detail & Related papers (2021-01-22T14:09:30Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
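For orientation, a generic objective of this reconstruction-plus-regularization kind can be written as follows; the exact loss and constraints in the cited paper may differ, so treat this formulation as an assumption.

```latex
% Generic sparse-PCA-style objective: reconstruction error plus an
% l_{2,p} row-sparsity penalty on the projection matrix W (assumed form).
\min_{W \in \mathbb{R}^{d \times k}}
  \lVert X - X W W^{\top} \rVert_F^2 + \lambda \lVert W \rVert_{2,p}^{p},
\qquad
\lVert W \rVert_{2,p} = \Bigl( \sum_{i=1}^{d} \lVert w^{i} \rVert_2^{p} \Bigr)^{1/p},
\quad 0 < p \leq 1,
```

where $w^{i}$ is the $i$-th row of $W$; smaller $p$ drives more rows of $W$ to zero and thus discards more features.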
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach [0.0]
We propose a multiobjective perspective on the training of neural networks by treating its prediction accuracy and the network complexity as two individual objective functions.
Preliminary numerical results on exemplary convolutional neural networks confirm that large reductions in the complexity of neural networks with negligible loss of accuracy are possible.
arXiv Detail & Related papers (2020-08-31T13:28:03Z)
- NeuMiss networks: differentiable programming for supervised learning with missing values [0.0]
We derive the analytical form of the optimal predictor under a linearity assumption.
We propose a new principled architecture, named NeuMiss networks.
They have good predictive accuracy with both a number of parameters and a computational complexity independent of the number of missing data patterns.
arXiv Detail & Related papers (2020-07-03T11:42:25Z) - Efficient Characterization of Dynamic Response Variation Using
Multi-Fidelity Data Fusion through Composite Neural Network [9.446974144044733]
We take advantage of the multi-level response prediction opportunity in structural dynamic analysis.
We formulate a composite neural network fusion approach that can fully utilize the multi-level, heterogeneous datasets obtained.
arXiv Detail & Related papers (2020-05-07T02:44:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.