An Abstract View on Optimizations in Propositional Frameworks
- URL: http://arxiv.org/abs/2206.06440v3
- Date: Mon, 20 Mar 2023 22:23:20 GMT
- Title: An Abstract View on Optimizations in Propositional Frameworks
- Authors: Yuliya Lierler
- Abstract summary: We propose a unifying framework of so-called weight systems that eliminates syntactic distinctions between paradigms.
This framework has significant simplifying and explanatory potential in the studies of optimization and modularity in automated reasoning and knowledge representation.
- Score: 0.6853165736531939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Search-optimization problems are plentiful in scientific and engineering
domains. Artificial intelligence has long contributed to the development of
search algorithms and declarative programming languages geared toward solving
and modeling search-optimization problems. Automated reasoning and knowledge
representation are the subfields of AI that are particularly vested in these
developments. Many popular automated reasoning paradigms provide users with
languages supporting optimization statements: answer set programming, MaxSAT,
or MinOne, to name a few. These paradigms vary significantly in their languages
and in the ways they express quality conditions on computed solutions. Here we
propose a unifying framework of so-called weight systems that eliminates
syntactic distinctions between paradigms and allows us to see essential
similarities and differences between optimization statements provided by
paradigms. This unifying outlook has significant simplifying and explanatory
potential in the studies of optimization and modularity in automated reasoning
and knowledge representation. It also supplies researchers with a convenient
tool for proving the formal properties of distinct frameworks; bridging these
frameworks; and facilitating the development of translational solvers.
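The contrast the abstract draws can be made concrete with a small sketch. The following Python toy (not the paper's formal definition of a weight system; the clause encoding, weights, and variables are illustrative assumptions) scores candidate assignments the way MaxSAT soft clauses do: hard clauses must hold, and each violated soft clause adds its weight to the cost of a solution.

```python
from itertools import product

# Clauses are lists of literals: a positive int is a variable,
# a negative int is its negation.
hard = [[1, 2], [-1, 3]]          # must be satisfied
soft = [(3, [-2]), (1, [-3])]     # (weight, clause): pay weight if violated

def satisfied(clause, assignment):
    """A clause holds if any of its literals is true under the assignment."""
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

def cost(assignment):
    """Weight-system view: map each candidate solution to a numeric cost."""
    if not all(satisfied(c, assignment) for c in hard):
        return float("inf")       # violating a hard clause is inadmissible
    return sum(w for w, c in soft if not satisfied(c, assignment))

# Enumerate all assignments over variables 1..3 and keep the cheapest.
best = min(
    ({v: b for v, b in zip([1, 2, 3], bits)}
     for bits in product([False, True], repeat=3)),
    key=cost,
)
print(best, cost(best))
```

Different paradigms (ASP weak constraints, MaxSAT soft clauses, MinOne) differ in syntax but all induce such a cost function on candidate solutions, which is the similarity the proposed framework exposes.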
Related papers
- Deep Insights into Automated Optimization with Large Language Models and Evolutionary Algorithms [3.833708891059351]
Large Language Models (LLMs) and Evolutionary Algorithms (EAs) offer a promising new approach to overcoming limitations and making optimization more automated.
LLMs act as dynamic agents that can generate, refine, and interpret optimization strategies.
EAs efficiently explore complex solution spaces through evolutionary operators.
arXiv Detail & Related papers (2024-10-28T09:04:49Z) - Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - Machine Learning Insides OptVerse AI Solver: Design Principles and
Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
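The unrolling strategy this summary mentions can be sketched without any autodiff library. The toy below is a hand-rolled illustration, not the paper's folded-optimization system: the quadratic inner objective, step size, and step count are arbitrary assumptions. It runs gradient-descent steps and propagates the derivative of the iterate with respect to a parameter through each step, forward-mode style.

```python
def unrolled_solver(theta, x0=0.0, alpha=0.1, steps=20):
    """Gradient descent on the inner objective f(x) = (x - theta)**2,
    tracking dx/dtheta alongside x through every unrolled step."""
    x, dx_dtheta = x0, 0.0
    for _ in range(steps):
        grad = 2.0 * (x - theta)          # df/dx at the current iterate
        x = x - alpha * grad              # one solver step
        # Chain rule through the step x' = (1 - 2*alpha)*x + 2*alpha*theta:
        dx_dtheta = dx_dtheta * (1 - 2 * alpha) + 2 * alpha
    return x, dx_dtheta

x_final, sens = unrolled_solver(theta=3.0)
# Closed form: x_T = theta + (1-2a)**T * (x0-theta), so dx_T/dtheta = 1-(1-2a)**T
print(x_final, sens)
```

The backward pass studied in the paper corresponds to differentiating through exactly this kind of iteration, which is what makes its analytical treatment valuable.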
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Socio-cognitive Optimization of Time-delay Control Problems using
Evolutionary Metaheuristics [89.24951036534168]
Metaheuristics are universal optimization algorithms intended for difficult problems that are unsolvable by classic approaches.
In this paper we aim at constructing a novel socio-cognitive metaheuristic based on castes, and apply several versions of this algorithm to the optimization of a time-delay system model.
arXiv Detail & Related papers (2022-10-23T22:21:10Z) - A Framework for Inherently Interpretable Optimization Models [0.0]
Solving large-scale problems that seemed intractable decades ago is now a routine task.
One major barrier is that the optimization software can be perceived as a black box.
We propose an optimization framework to derive solutions that inherently come with an easily comprehensible explanatory rule.
arXiv Detail & Related papers (2022-08-26T10:32:00Z) - Tools and Methodologies for Verifying Answer Set Programs [0.0]
ASP is a powerful declarative programming paradigm commonly used for solving challenging search and optimization problems.
As an approach to Knowledge Representation and Reasoning, ASP benefits from its simplicity, conciseness and rigorously defined semantics.
My research is concerned with extending the theory and tools supporting the verification of ASP programs.
arXiv Detail & Related papers (2022-08-05T10:50:21Z) - Unifying Framework for Optimizations in non-boolean Formalisms [0.6853165736531939]
Many popular automated reasoning paradigms provide languages supporting optimization statements.
Here we propose a unifying framework that eliminates syntactic distinctions between paradigms.
We study formal properties of the proposed systems that translate into formal properties of paradigms that can be captured within our framework.
arXiv Detail & Related papers (2022-06-16T00:38:19Z) - On the Configuration of More and Less Expressive Logic Programs [11.331373810571993]
We consider two well-known model-based AI methodologies, SAT and ASP, and define a number of syntactic features that may characterise their inputs.
Results of a wide experimental analysis involving SAT and ASP domains, taken from respective competitions, show the different advantages that can be obtained by using input reformulation and configuration.
arXiv Detail & Related papers (2022-03-02T10:55:35Z) - Efficient and Modular Implicit Differentiation [68.74748174316989]
We propose a unified, efficient and modular approach for implicit differentiation of optimization problems.
We show that seemingly simple principles allow us to recover many recently proposed implicit differentiation methods and to create new ones easily.
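Implicit differentiation, as summarized above, avoids unrolling a solver: it differentiates the optimality condition at the solution instead. A scalar sketch follows; this is a simplified illustration, not the paper's modular approach, and the fixed-point equation g(x, theta) = x - cos(theta*x) = 0 is an arbitrary example chosen because its iteration converges.

```python
import math

def solve(theta, x=0.0, iters=100):
    """Find x* with g(x, theta) = x - cos(theta*x) = 0 by fixed-point iteration."""
    for _ in range(iters):
        x = math.cos(theta * x)
    return x

def implicit_grad(theta):
    """Implicit function theorem: dx*/dtheta = -(dg/dtheta) / (dg/dx) at x*."""
    x = solve(theta)
    dg_dx = 1 + theta * math.sin(theta * x)   # d/dx  [x - cos(theta*x)]
    dg_dtheta = x * math.sin(theta * x)       # d/dth [x - cos(theta*x)]
    return -dg_dtheta / dg_dx

print(solve(1.0), implicit_grad(1.0))
```

The derivative is obtained from the solution alone, with no record of the solver's iterations, which is what makes the approach efficient and modular.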
arXiv Detail & Related papers (2021-05-31T17:45:58Z) - Investigating Bi-Level Optimization for Learning and Vision from a
Unified Perspective: A Survey and Beyond [114.39616146985001]
In the fields of machine learning and computer vision, despite different motivations and mechanisms, many complex problems contain a series of closely related subproblems.
In this paper, we first uniformly express these complex learning and vision problems from the perspective of Bi-Level Optimization (BLO).
Then we construct a value-function-based single-level reformulation and establish a unified algorithmic framework to understand and formulate mainstream gradient-based BLO methodologies.
arXiv Detail & Related papers (2021-01-27T16:20:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.