Optimisation Is Not What You Need
- URL: http://arxiv.org/abs/2507.03045v1
- Date: Thu, 03 Jul 2025 08:50:20 GMT
- Title: Optimisation Is Not What You Need
- Authors: Alfredo Ibias
- Abstract summary: We show that optimisation methods share some fundamental flaws that prevent them from becoming a true artificial cognition. Specifically, the field has identified catastrophic forgetting as a fundamental problem in developing such cognition. This paper formally proves that this problem is inherent to optimisation methods, and as such it will always limit approaches that try to solve the Artificial General Intelligence problem as an optimisation problem.
- Score: 4.13365552362244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Artificial Intelligence field has focused on developing optimisation methods to solve multiple problems, specifically problems that we thought to be solvable only through cognition. The obtained results have been outstanding, even surpassing the Turing Test. However, we have found that these optimisation methods share some fundamental flaws that prevent them from becoming a true artificial cognition. Specifically, the field has identified catastrophic forgetting as a fundamental problem in developing such cognition. This paper formally proves that this problem is inherent to optimisation methods, and as such it will always limit approaches that try to solve the Artificial General Intelligence problem as an optimisation problem. Additionally, it addresses the problem of overfitting and discusses other smaller problems that optimisation methods pose. Finally, it empirically shows how world-modelling methods avoid suffering from either problem. In conclusion, the field of Artificial Intelligence needs to look outside the machine learning field to find methods capable of developing an artificial cognition.
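The catastrophic forgetting the abstract refers to can be demonstrated with a minimal sketch (illustrative only, not the paper's formal proof or experimental setup): a single linear model fitted by gradient descent on task A, then further optimised on an incompatible task B, loses its performance on task A.

```python
# Illustrative sketch (not from the paper): catastrophic forgetting in a
# plain optimisation setting. A linear model is fitted on task A, then
# trained further on task B only; its loss on task A rises again.
import numpy as np

rng = np.random.default_rng(0)

# Two incompatible regression tasks: same inputs, different target mappings.
X = rng.normal(size=(100, 3))
w_a, w_b = np.array([1.0, -2.0, 0.5]), np.array([-1.5, 0.5, 2.0])
y_a, y_b = X @ w_a, X @ w_b

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(w, X, y, steps=200, lr=0.05):
    # Plain gradient descent on the mean-squared error.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w = np.zeros(3)
w = train(w, X, y_a)            # optimise for task A
loss_a_before = mse(w, X, y_a)  # near zero: task A is learned
w = train(w, X, y_b)            # now optimise for task B only
loss_a_after = mse(w, X, y_a)   # large again: task A is forgotten

print(loss_a_before, loss_a_after)
```

Because the optimiser only sees the current objective, nothing anchors the weights to the earlier task; this is the behaviour the paper argues is inherent to optimisation-based approaches.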
Related papers
- Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs [86.79757571440082]
Large language models (LLMs) such as OpenAI's o1 have demonstrated remarkable abilities in complex reasoning tasks. We identify a phenomenon we term underthinking, where o1-like LLMs frequently switch between different reasoning thoughts. We propose a decoding strategy with a thought switching penalty (TIP) that discourages premature transitions between thoughts.
arXiv Detail & Related papers (2025-01-30T18:58:18Z) - Reasoning Paths Optimization: Learning to Reason and Explore From Diverse Paths [69.39559168050923]
We introduce Reasoning Paths Optimization (RPO), which enables learning to reason and explore from diverse paths.
Our approach encourages favorable branches at each reasoning step while penalizing unfavorable ones, enhancing the model's overall problem-solving performance.
We focus on multi-step reasoning tasks, such as math word problems and science-based exam questions.
arXiv Detail & Related papers (2024-10-07T06:37:25Z) - Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
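The two-stage pipeline described above can be sketched in a few lines (an assumed toy setup, not the paper's method): a trivial decision problem, picking the item with the largest unknown value, where the values are first predicted from features and the decision is then made on the predictions.

```python
# Minimal predict-then-optimize sketch (assumed toy setup, not from the
# paper). Stage 1 predicts unknown item values from features; stage 2
# solves the (trivial) decision problem argmax_i v_i on the predictions.
import numpy as np

rng = np.random.default_rng(1)

# True but unknown linear mapping from features x to item values v = A x.
A = np.array([[2.0, -1.0], [0.5, 1.5], [-1.0, 2.0]])
X = rng.normal(size=(200, 2))   # observed features
V = X @ A.T                     # observed item values (noiseless here)

# Stage 1 (predict): least-squares estimate of the value model.
B, *_ = np.linalg.lstsq(X, V, rcond=None)
A_hat = B.T

# Stage 2 (optimize): for a new feature vector, pick the best item
# according to the predicted values.
x_new = rng.normal(size=2)
decision = int(np.argmax(A_hat @ x_new))
best = int(np.argmax(A @ x_new))
print(decision, best)
```

The joint approach the paper proposes would instead learn the mapping from `x_new` to `decision` directly, skipping the intermediate parameter prediction.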
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
In machine learning systems, the need to curtail their behavior has become increasingly apparent.
This is evidenced by recent advancements towards developing models that satisfy robustness requirements.
Our results show that rich parametrizations effectively mitigate these issues in finite-dimensional learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z) - Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement [50.62461749446111]
Self-Polish (SP) is a novel method that facilitates the model's reasoning by guiding it to progressively refine the given problems to be more comprehensible and solvable.
SP is orthogonal to all other prompting methods on the answer/reasoning side, such as CoT, allowing for seamless integration with state-of-the-art techniques for further improvement.
arXiv Detail & Related papers (2023-05-23T19:58:30Z) - Computability of Optimizers [71.84486326350338]
We show that in various situations the optimizer is unattainable on Turing machines and consequently on digital computers.
We prove such results for a variety of well-known problems from very different areas, including artificial intelligence, financial mathematics, and information theory.
arXiv Detail & Related papers (2023-01-15T17:41:41Z) - Exploring the Nuances of Designing (with/for) Artificial Intelligence [0.0]
We explore the construct of infrastructure as a means to simultaneously address algorithmic and societal issues when designing AI.
Neither algorithmic solutions, nor purely humanistic ones, will be enough to fully address undesirable outcomes in the narrow state of AI.
arXiv Detail & Related papers (2020-10-22T20:34:35Z) - Nature-Inspired Optimization Algorithms: Challenges and Open Problems [3.7692411550925673]
Problems in science and engineering can be formulated as optimization problems, subject to complex nonlinear constraints.
The solutions of highly nonlinear problems usually require sophisticated optimization algorithms, and traditional algorithms may struggle to deal with such problems.
A current trend is to use nature-inspired algorithms due to their flexibility and effectiveness.
arXiv Detail & Related papers (2020-03-08T13:00:04Z) - Bio-inspired Optimization: metaheuristic algorithms for optimization [0.0]
Today, solving complex real-world problems has become a fundamentally vital and critical task.
Traditional optimization methods are found to be effective for small-scale problems.
For real-world large-scale problems, traditional methods either do not scale up, fail to obtain optimal solutions, or end up giving solutions only after a long running time.
arXiv Detail & Related papers (2020-02-24T13:26:34Z) - Boldly Going Where No Prover Has Gone Before [0.0]
I argue that the most interesting goal facing researchers in automated reasoning is being able to solve problems that cannot currently be solved by existing tools and methods.
This may appear obvious, and is clearly not an original thought, but focusing on this as a primary goal allows us to examine other goals in a new light.
arXiv Detail & Related papers (2019-12-30T15:14:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.