Solving Recurrence Relations using Machine Learning, with Application to
Cost Analysis
- URL: http://arxiv.org/abs/2309.07259v1
- Date: Wed, 30 Aug 2023 08:55:36 GMT
- Title: Solving Recurrence Relations using Machine Learning, with Application to
Cost Analysis
- Authors: Maximiliano Klemen, Miguel Á. Carreira-Perpiñán, Pedro
Lopez-Garcia
- Abstract summary: We develop a novel, general approach for solving arbitrary, constrained recurrence relations.
Our approach can find closed-form solutions, in a reasonable time, for classes of recurrences that cannot be solved by existing cost analysis systems or by current Computer Algebra Systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic static cost analysis infers information about the resources used by
programs without actually running them with concrete data, and presents such
information as functions of input data sizes. Most of the analysis tools for
logic programs (and other languages) are based on setting up recurrence
relations representing (bounds on) the computational cost of predicates, and
solving them to find closed-form functions that are equivalent to (or a bound
on) them. Such recurrence solving is a bottleneck in current tools: many of the
recurrences that arise during the analysis cannot be solved with current
solvers, such as Computer Algebra Systems (CASs), so that specific methods for
different classes of recurrences need to be developed. We address such a
challenge by developing a novel, general approach for solving arbitrary,
constrained recurrence relations, that uses machine-learning sparse regression
techniques to guess a candidate closed-form function, and a combination of an
SMT-solver and a CAS to check whether such a function is actually a solution of
the recurrence. We have implemented a prototype and evaluated it with
recurrences generated by a cost analysis system (the one in CiaoPP). The
experimental results are quite promising, showing that our approach can find
closed-form solutions, in a reasonable time, for classes of recurrences that
cannot be solved by such a system, nor by current CASs.
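The sketch below is a minimal, hypothetical illustration of this guess-and-check pipeline: it guesses a closed form for a toy recurrence via Lasso sparse regression over a small hand-picked term basis, and then checks the candidate symbolically with SymPy. The recurrence, the basis, and the use of SymPy alone (in place of the SMT-solver plus CAS combination used by the paper, and without any connection to CiaoPP) are all assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch, assuming a toy recurrence f(n) = f(n-1) + n with f(0) = 0,
# a small hand-picked term basis, and SymPy standing in for the SMT-solver +
# CAS verification step; this is not the paper's actual implementation.
import numpy as np
import sympy as sp
from sklearn.linear_model import Lasso

def f(n):
    # Evaluate the recurrence on concrete input sizes (the "training data").
    return 0 if n == 0 else f(n - 1) + n

ns = np.arange(1, 30)
ys = np.array([f(int(n)) for n in ns], dtype=float)

# Candidate closed-form terms (a hypothetical basis).
terms = {
    "1": np.ones_like(ns, dtype=float),
    "n": ns.astype(float),
    "n**2": ns.astype(float) ** 2,
    "n*log(n)": ns * np.log(ns),
}
X = np.column_stack(list(terms.values()))

# Guess step: sparse regression should drive most coefficients to (near) zero.
model = Lasso(alpha=0.1, fit_intercept=False).fit(X, ys)
guess = {t: round(c, 3) for t, c in zip(terms, model.coef_) if abs(c) > 1e-3}
print("candidate coefficients:", guess)   # roughly 0.5*n + 0.5*n**2 expected

# Check step: verify symbolically that candidate(n) - candidate(n-1) - n == 0,
# using the rationalized coefficients suggested by the guess.
n = sp.symbols("n", positive=True)
candidate = sp.Rational(1, 2) * n + sp.Rational(1, 2) * n**2
print("verified:", sp.simplify(candidate - candidate.subs(n, n - 1) - n) == 0)
```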
Related papers
- The Function-Representation Model of Computation [2.5069344340760713]
We propose a novel model of computation, one where memory and program are merged: the Function-Representation.
This model of computation involves defining a generic Function-Representation and instantiating multiple instances of it.
We also explore the kind of functions a Function-Representation can implement, and present different ways to organise multiple instances of a Function-Representation.
arXiv Detail & Related papers (2024-10-10T13:54:35Z) - Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite state space.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
arXiv Detail & Related papers (2024-05-24T17:11:27Z) - Stochastic Q-learning for Large Discrete Action Spaces [79.1700188160944]
In complex environments with discrete action spaces, effective decision-making is critical in reinforcement learning (RL).
We present value-based RL approaches which, as opposed to optimizing over the entire set of $n$ actions, only consider a variable set of actions, possibly as small as $\mathcal{O}(\log(n))$.
The presented value-based RL methods include, among others, Q-learning, StochDQN, StochDDQN, all of which integrate this approach for both value-function updates and action selection.
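The following is a minimal tabular sketch of the subset-maximization idea, with placeholder states, actions, and hyperparameters; it is not the paper's StochDQN/StochDDQN implementations.

```python
# A minimal sketch: in a tabular Q-learning update, take the max over a random
# subset of about log2(n) actions (plus the previously chosen best action)
# instead of over all n actions. All names and parameters are placeholders.
import math
import random

def stoch_argmax(q_row, prev_best):
    # Restrict the argmax to ~log2(n) randomly sampled actions plus the
    # previously selected best action, instead of scanning all n actions.
    n = len(q_row)
    k = max(1, int(math.log2(n)))
    subset = set(random.sample(range(n), k)) | {prev_best}
    return max(subset, key=lambda a: q_row[a])

def stoch_q_update(Q, s, a, r, s_next, prev_best, alpha=0.1, gamma=0.99):
    # Standard Q-learning target, but the max over next-state actions is
    # replaced by the stochastic (subset-restricted) maximization above.
    a_star = stoch_argmax(Q[s_next], prev_best)
    Q[s][a] += alpha * (r + gamma * Q[s_next][a_star] - Q[s][a])
    return a_star   # can be reused as prev_best for the next step

# Toy usage: 4 states, 1024 actions, all Q-values initialised to zero.
Q = [[0.0] * 1024 for _ in range(4)]
print(stoch_q_update(Q, s=0, a=3, r=1.0, s_next=1, prev_best=0))
```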
arXiv Detail & Related papers (2024-05-16T17:58:44Z) - A Machine Learning-based Approach for Solving Recurrence Relations and its use in Cost Analysis of Logic Programs [0.0]
We develop a novel, general approach for solving arbitrary, constrained recurrence relations.
Our prototype implementation and its experimental evaluation within the context of the CiaoPP system show quite promising results.
arXiv Detail & Related papers (2024-05-11T09:51:36Z) - Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose an instantiation of our framework, Deep Generative Symbolic Regression.
arXiv Detail & Related papers (2023-12-30T17:05:31Z) - MIRACLE: Causally-Aware Imputation via Learning Missing Data Mechanisms [82.90843777097606]
We propose a causally-aware imputation algorithm (MIRACLE) for missing data.
MIRACLE iteratively refines the imputation of a baseline by simultaneously modeling the missingness generating mechanism.
We conduct extensive experiments on synthetic and a variety of publicly available datasets to show that MIRACLE is able to consistently improve imputation.
arXiv Detail & Related papers (2021-11-04T22:38:18Z) - Computational Graph Completion [0.8122270502556374]
We introduce a framework for generating, organizing, and reasoning with computational knowledge.
It is motivated by the observation that most problems in Computational Sciences and Engineering can be described as that of completing a computational graph.
arXiv Detail & Related papers (2021-10-20T00:32:06Z) - Symbolic Regression by Exhaustive Search: Reducing the Search Space
Using Syntactical Constraints and Efficient Semantic Structure Deduplication [2.055204980188575]
Symbolic regression is a powerful system identification technique in industrial scenarios where no prior knowledge on model structure is available.
In this chapter we introduce a deterministic symbolic regression algorithm specifically designed to address these issues.
A finite enumeration of all possible models is guaranteed by structural restrictions as well as a caching mechanism for detecting semantically equivalent solutions.
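As a rough, generic illustration of how semantically equivalent candidates can be cached during such a search, the sketch below fingerprints each candidate by its rounded outputs on fixed probe inputs and skips repeated fingerprints; the chapter's actual deduplication mechanism is not reproduced here.

```python
# A minimal sketch of caching-based semantic deduplication, assuming candidates
# are fingerprinted by their (rounded) outputs on fixed probe points.
import numpy as np

rng = np.random.default_rng(0)
probe_x = rng.uniform(-2.0, 2.0, size=16)   # fixed sample points

seen = set()

def is_duplicate(candidate):
    """candidate: a model expressed as a callable over a NumPy array."""
    with np.errstate(all="ignore"):
        fingerprint = tuple(np.round(candidate(probe_x), 6))
    if fingerprint in seen:
        return True        # semantically equivalent to an already-cached model
    seen.add(fingerprint)
    return False

# x + x and 2*x evaluate identically, so only the first candidate is kept.
print(is_duplicate(lambda x: x + x))   # False: new model, cached
print(is_duplicate(lambda x: 2 * x))   # True: semantically equivalent duplicate
```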
arXiv Detail & Related papers (2021-09-28T17:47:51Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - On Function Approximation in Reinforcement Learning: Optimism in the
Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the function.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - Symbolic Regression using Mixed-Integer Nonlinear Optimization [9.638685454900047]
The Symbolic Regression (SR) problem is a hard problem in machine learning.
We propose a hybrid algorithm that combines mixed-integer nonlinear optimization with explicit enumeration.
We show that our algorithm is competitive, for some synthetic data sets, with a state-of-the-art SR software and a recent physics-inspired method called AI Feynman.
arXiv Detail & Related papers (2020-06-11T20:53:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.