Model-agnostic multi-objective approach for the evolutionary discovery
of mathematical models
- URL: http://arxiv.org/abs/2107.03146v2
- Date: Thu, 8 Jul 2021 08:16:48 GMT
- Title: Model-agnostic multi-objective approach for the evolutionary discovery
of mathematical models
- Authors: Alexander Hvatov, Mikhail Maslyaev, Iana S. Polonskaya, Mikhail
Sarafanov, Mark Merezhnikov, Nikolay O. Nikitin
- Abstract summary: In modern data science, it is often more interesting to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the desired properties of the algorithm.
- Score: 55.41644538483948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In modern data science, it is often not enough to obtain only a data-driven
model with good prediction quality. On the contrary, it is more interesting
to understand the properties of the model and which of its parts could be replaced to
obtain better results. Such questions are unified under machine learning
interpretability, which can be considered one of the area's rising
topics. In the paper, we use multi-objective evolutionary optimization for
composite data-driven model learning to obtain the desired properties of the
algorithm. This means that whereas one of the apparent objectives is precision,
the others could be chosen as the complexity of the model, its robustness, and many
others. The application of the method is shown on examples of multi-objective learning
of composite models, differential equations, and closed-form algebraic
expressions, which are unified into an approach for model-agnostic learning of
interpretable models.
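The trade-off the abstract describes (precision against complexity) is the core of Pareto-based multi-objective selection. Below is a minimal, hypothetical sketch of non-dominated filtering over (error, complexity) pairs; it illustrates the general idea only and is not the paper's implementation.

```python
def pareto_front(candidates):
    """Return the non-dominated subset of (error, complexity) pairs.

    A candidate dominates another if it is no worse in both objectives
    and strictly better in at least one (both objectives minimized).
    """
    front = []
    for a in candidates:
        dominated = any(
            b[0] <= a[0] and b[1] <= a[1] and b != a
            for b in candidates
        )
        if not dominated:
            front.append(a)
    return front

# Example: each tuple is (prediction error, number of model terms).
models = [(0.10, 5), (0.08, 9), (0.25, 2), (0.12, 7), (0.30, 1)]
print(pareto_front(models))
```

An evolutionary search would repeat this filtering each generation, letting the user pick a final model from the front at the preferred accuracy/interpretability balance.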
Related papers
- Partial-Multivariate Model for Forecasting [28.120094495344453]
We propose a Transformer-based partial-multivariate model, PMformer, for forecasting problems.
We demonstrate that PMformer outperforms various univariate and complete-multivariate models.
We also highlight other advantages of PMformer: efficiency and robustness under missing features.
arXiv Detail & Related papers (2024-08-19T05:18:50Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Model-agnostic interpretation by visualization of feature perturbations [0.0]
We propose a model-agnostic interpretation approach that uses visualization of feature perturbations induced by the particle swarm optimization algorithm.
We validate our approach both qualitatively and quantitatively on publicly available datasets.
arXiv Detail & Related papers (2021-01-26T00:53:29Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
arXiv Detail & Related papers (2020-04-22T15:15:46Z)
- Learning Predictive Representations for Deformable Objects Using Contrastive Estimation [83.16948429592621]
We propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model.
We show substantial improvements over standard model-based learning techniques across our rope and cloth manipulation suite.
arXiv Detail & Related papers (2020-03-11T17:55:15Z)
- Predicting Multidimensional Data via Tensor Learning [0.0]
We develop a model that retains the intrinsic multidimensional structure of the dataset.
To estimate the model parameters, an Alternating Least Squares algorithm is developed.
The proposed model is able to outperform benchmark models present in the forecasting literature.
arXiv Detail & Related papers (2020-02-11T11:57:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences.