Order-theoretic models for decision-making: Learning, optimization, complexity and computation
- URL: http://arxiv.org/abs/2406.10730v1
- Date: Sat, 15 Jun 2024 20:20:43 GMT
- Title: Order-theoretic models for decision-making: Learning, optimization, complexity and computation
- Authors: Pedro Hack
- Abstract summary: The study of intelligent systems explains behaviour in terms of economic rationality.
The first aim of this thesis is to clarify the applicability of these results in the study of intelligent systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The study of intelligent systems explains behaviour in terms of economic rationality. This results in an optimization principle involving a function, or utility, which states that the system will evolve until the configuration of maximum utility is achieved. Recently, this theory has incorporated constraints, i.e., the optimum is achieved when the utility is maximized while respecting some information-processing constraints. This is reminiscent of thermodynamic systems. As such, the study of intelligent systems has benefited from the tools of thermodynamics. The first aim of this thesis is to clarify the applicability of these results in the study of intelligent systems. We can think of the local transition steps in thermodynamic or intelligent systems as being driven by uncertainty. In fact, the transitions in both systems can be described in terms of majorization. Hence, real-valued uncertainty measures like Shannon entropy are simply a proxy for their more involved behaviour. More generally, real-valued functions are fundamental to the study of optimization and complexity in the order-theoretic approach to several topics, including economics, thermodynamics, and quantum mechanics. The second aim of this thesis is to improve on this classification. The basic similarity between thermodynamic and intelligent systems rests on an uncertainty notion expressed by a preorder. We can also think of the transitions in the steps of a computational process as a decision-making procedure. In fact, by adding some requirements on the considered order structures, we can build an abstract model of uncertainty reduction that allows us to incorporate computability, that is, to distinguish the objects that can be constructed by following a finite set of instructions from those that cannot. The third aim of this thesis is to clarify the requirements on the order structure that allow such a framework.
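As a rough illustration of the majorization preorder and its relation to Shannon entropy mentioned in the abstract (the helper functions and the distributions `p` and `q` below are hypothetical examples, not code or data from the thesis): a distribution p majorizes q when every partial sum of p's probabilities, sorted in decreasing order, dominates the corresponding partial sum of q's, and Schur-concave measures such as Shannon entropy are monotone with respect to this preorder.

```python
# Hypothetical illustration (not from the thesis): the majorization preorder
# on finite probability distributions, with Shannon entropy as a real-valued
# proxy that respects it (Schur-concavity).
import numpy as np

def majorizes(p, q, tol=1e-12):
    """True if p majorizes q, i.e. p is at least as 'concentrated' as q."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]
    q = np.sort(np.asarray(q, dtype=float))[::-1]
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - tol))

def shannon_entropy(p):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Made-up example distributions: p is more concentrated than q.
p = [0.7, 0.2, 0.1]
q = [0.4, 0.35, 0.25]

print(majorizes(p, q))                           # True: p is above q in the preorder
print(shannon_entropy(p) <= shannon_entropy(q))  # True: entropy respects the preorder
```

The point of the sketch is that the preorder is the primary structure while the scalar entropy value only partially reflects it: two distributions that are incomparable under majorization can still be ranked by their entropies, which is one sense in which real-valued measures are "simply a proxy" for the underlying order.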
Related papers
- Finite-Time Processes In Quantum Thermodynamics: The Limits Of Irreversibility [0.0]
The emergence of irreversibility in physical processes, despite the reversible nature of quantum mechanics, remains an open question in physics.
This thesis explores the intricate relationship between quantum mechanics and thermodynamics.
We tackle the challenge of deriving irreversible thermodynamic behavior from the reversible microscopic framework of quantum mechanics.
arXiv Detail & Related papers (2024-10-24T16:48:24Z) - A Systems-Theoretical Formalization of Closed Systems [47.99822253865054]
There is a lack of formalism for some key foundational concepts in systems engineering.
One of the most recently acknowledged deficits is the inadequacy of systems engineering practices for intelligent systems.
arXiv Detail & Related papers (2023-11-16T19:01:01Z) - Optimal probabilistic quantum control theory [0.0]
This work proposes a novel control framework that considers the representation of the system's quantum states.
It uses the Shannon relative entropy from information theory to design optimal randomised controllers.
arXiv Detail & Related papers (2022-09-30T06:31:50Z) - Agnostic Physics-Driven Deep Learning [82.89993762912795]
This work establishes that a physical system can perform statistical gradient learning without gradient computations.
In Aeqprop, the specifics of the system do not have to be known: the procedure is based on external manipulations.
Aeqprop also establishes that in natural (bio)physical systems, genuine gradient-based statistical learning may result from generic, relatively simple mechanisms.
arXiv Detail & Related papers (2022-05-30T12:02:53Z) - A Schmidt decomposition approach to quantum thermodynamics [0.0]
We propose a novel approach to describe the thermodynamics of arbitrary bipartite autonomous quantum systems.
This formalism provides a simple, exact and symmetrical framework for expressing the energetics between interacting systems.
arXiv Detail & Related papers (2022-05-13T22:38:56Z) - Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline
Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z) - Driving rapidly while remaining in control: classical shortcuts from
Hamiltonian to stochastic dynamics [0.0]
Thermodynamics lays down a broad framework to revisit the venerable concepts of heat, work and entropy production.
We review the different strategies that have been developed to realize finite-time state-to-state transformations in both over- and underdamped regimes.
arXiv Detail & Related papers (2022-04-23T16:37:38Z) - Symmetry Group Equivariant Architectures for Physics [52.784926970374556]
In the domain of machine learning, an awareness of symmetries has driven impressive performance breakthroughs.
We argue that both the physics community and the broader machine learning community have much to understand.
arXiv Detail & Related papers (2022-03-11T18:27:04Z) - Human-Algorithm Collaboration: Achieving Complementarity and Avoiding
Unfairness [92.26039686430204]
We show that even in carefully-designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions where complementarity is impossible.
arXiv Detail & Related papers (2022-02-17T18:44:41Z) - CausalCity: Complex Simulations with Agency for Causal Discovery and
Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is to introduce agency, such that it is simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z) - Maximal Algorithmic Caliber and Algorithmic Causal Network Inference:
General Principles of Real-World General Intelligence? [0.0]
Ideas and formalisms from far-from-equilibrium thermodynamics are ported to the context of computational processes.
A Principle of Maximum Algorithmic Caliber is proposed, providing guidance as to what computational processes one should hypothesize.
arXiv Detail & Related papers (2020-05-10T06:14:59Z)