Empirical Computation
- URL: http://arxiv.org/abs/2503.10954v1
- Date: Thu, 13 Mar 2025 23:40:42 GMT
- Title: Empirical Computation
- Authors: Eric Tang, Marcel Böhme
- Abstract summary: We call this approach *empirical computation* and observe that its capabilities and limits cannot be understood within the classic, rationalist framework of computation. Our purpose is to establish empirical computation as a field in SE that is timely and rich with interesting problems.
- Score: 14.14554547058963
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this vision paper, we explore the challenges and opportunities of a form of computation that employs an empirical (rather than a formal) approach, where the solution of a computational problem is returned as empirically most likely (rather than necessarily correct). We call this approach *empirical computation* and observe that its capabilities and limits *cannot* be understood within the classic, rationalist framework of computation. While we take a very broad view of "computational problem", a classic, well-studied example is *sorting*: Given a set of $n$ numbers, return these numbers sorted in ascending order.
  * To run a classical, *formal computation*, we might first think about a *specific algorithm* (e.g., merge sort) before developing a *specific* program that implements it. The program will expect the input to be given in a *specific* format, type, or data structure (e.g., unsigned 32-bit integers). In software engineering, we have many approaches to analyze the correctness of such programs. From complexity theory, we know that there exists no correct program that can solve the average instance of the sorting problem faster than $O(n\log n)$.
  * To run an *empirical computation*, we might directly ask a large language model (LLM) to solve *any* computational problem (which can be stated informally in natural language) and provide the input in *any* format (e.g., negative numbers written as Chinese characters). There is no (problem-specific) program that could be analyzed for correctness. Also, the time it takes an LLM to return an answer is entirely *independent* of the computational complexity of the problem being solved.
  What are the capabilities and limits of empirical computation in general, in the problem-specific, and in the instance-specific setting? Our purpose is to establish empirical computation as a field in SE that is timely and rich with interesting problems.
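The sorting contrast in the abstract can be sketched in code. The merge sort below is the formal path: a specific algorithm whose correctness and $O(n\log n)$ behavior can be analyzed. The `sort_empirically` function is a hypothetical stand-in for the empirical path (the `prompt_llm` callable and the prompt wording are assumptions, not any real LLM API): there is no problem-specific logic inside it that could be analyzed for correctness.

```python
def merge_sort(nums):
    """Formal computation: a specific, provably correct O(n log n) algorithm
    over a specific data structure (here, a Python list of numbers)."""
    if len(nums) <= 1:
        return list(nums)
    mid = len(nums) // 2
    left, right = merge_sort(nums[:mid]), merge_sort(nums[mid:])
    merged, i, j = [], 0, 0
    # Merge the two sorted halves, always taking the smaller head element.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]


def sort_empirically(prompt_llm, raw_input):
    """Empirical computation: the problem is stated informally, the input may
    arrive in any format, and the answer is only empirically most likely to
    be correct. `prompt_llm` is a hypothetical text-in/text-out LLM callable."""
    return prompt_llm(f"Sort these numbers in ascending order: {raw_input}")
```

Note the asymmetry: `merge_sort` can be unit-tested and analyzed against a specification, whereas all of `sort_empirically`'s behavior lives in the model behind `prompt_llm`, so there is no program text to which classical correctness arguments apply.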
Related papers
- Simple and Provable Scaling Laws for the Test-Time Compute of Large Language Models [70.07661254213181]
We propose two principled algorithms for the test-time compute of large language models. We prove theoretically that the failure probability of one algorithm decays to zero exponentially as its test-time compute grows.
arXiv Detail & Related papers (2024-11-29T05:29:47Z) - Sum-of-Squares inspired Quantum Metaheuristic for Polynomial Optimization with the Hadamard Test and Approximate Amplitude Constraints [76.53316706600717]
The recently proposed quantum algorithm arXiv:2206.14999 is based on semidefinite programming (SDP).
We generalize the SDP-inspired quantum algorithm to sum-of-squares.
Our results show that our algorithm is suitable for large problems and approximates the best known classical results.
arXiv Detail & Related papers (2024-08-14T19:04:13Z) - On Hardware-efficient Inference in Probabilistic Circuits [5.335146727090435]
This work proposes the first dedicated approximate computing framework for PCs.
We leverage Addition As Int, resulting in linear PC computation with simple hardware elements.
We provide a theoretical approximation error analysis and present an error compensation mechanism.
arXiv Detail & Related papers (2024-05-22T13:38:47Z) - When can you trust feature selection? -- I: A condition-based analysis
of LASSO and generalised hardness of approximation [49.1574468325115]
We show that no (randomised) algorithm can determine the correct support sets (with probability $> 1/2$) of minimisers of LASSO when reading approximate input.
For ill-posed inputs, the algorithm runs forever, hence, it will never produce a wrong answer.
For any algorithm defined on an open set containing a point with infinite condition number, there is an input for which the algorithm will either run forever or produce a wrong answer.
arXiv Detail & Related papers (2023-12-18T18:29:01Z) - Exceeding Computational Complexity Trial-and-Error Dynamic Action and
Intelligence [0.0]
Computational complexity is a core theory of computer science, which dictates the degree of difficulty of computation.
In this paper, we try to clarify concepts, and we propose definitions such as unparticularized computing, particularized computing, computing agents, and dynamic search.
We also propose and discuss a framework, i.e., trial-and-error + dynamic search.
arXiv Detail & Related papers (2022-12-22T21:23:27Z) - Solving a Special Type of Optimal Transport Problem by a Modified
Hungarian Algorithm [2.1485350418225244]
We study a special type of optimal transport (OT) problem and propose a modified Hungarian algorithm to solve it exactly.
For an OT problem between marginals with $m$ and $n$ atoms, the computational complexity of the proposed algorithm is $O(m^2n)$.
arXiv Detail & Related papers (2022-10-29T16:28:46Z) - Weighted Programming [0.0]
We study weighted programming, a programming paradigm for specifying mathematical models.
We argue that weighted programming as a paradigm can be used to specify mathematical models beyond probability distributions.
arXiv Detail & Related papers (2022-02-15T17:06:43Z) - On Theoretical Complexity and Boolean Satisfiability [0.0]
This thesis introduces some of the most central concepts in the Theory of Computing.
We then explore some of its tractable as well as intractable variants such as Horn-SAT and 3-SAT.
Finally, we establish reductions from 3-SAT to some of the famous NP-complete graph problems.
arXiv Detail & Related papers (2021-12-22T10:13:34Z) - Searching for More Efficient Dynamic Programs [61.79535031840558]
We describe a set of program transformations, a simple metric for assessing the efficiency of a transformed program, and a search procedure to improve this metric.
We show that in practice, automated search can find substantial improvements to the initial program.
arXiv Detail & Related papers (2021-09-14T20:52:55Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Sparsified Linear Programming for Zero-Sum Equilibrium Finding [89.30539368124025]
We present a totally different approach to the problem, which is competitive and often orders of magnitude better than the prior state of the art.
With experiments on poker endgames, we demonstrate, for the first time, that modern linear program solvers are competitive against even game-specific modern variants of CFR.
arXiv Detail & Related papers (2020-06-05T13:48:26Z) - From Checking to Inference: Actual Causality Computations as
Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions can be efficiently automated. Using models with more than $8000$ variables, checking is computed in a matter of seconds, with MaxSAT outperforming ILP in many cases.
arXiv Detail & Related papers (2020-06-05T10:56:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.