On limitations of learning algorithms in competitive environments
- URL: http://arxiv.org/abs/2011.12728v2
- Date: Fri, 18 Jun 2021 07:07:05 GMT
- Title: On limitations of learning algorithms in competitive environments
- Authors: Alexander Y Klimenko and Dimitri A Klimenko
- Abstract summary: We discuss conceptual limitations of generic learning algorithms pursuing adversarial goals in competitive environments.
These limitations are shown to be related to intransitivity, which is commonly present in competitive environments.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We discuss conceptual limitations of generic learning algorithms pursuing
adversarial goals in competitive environments, and prove that they are subject
to limitations that are analogous to the constraints on knowledge imposed by
the famous theorems of Gödel and Turing. These limitations are shown to be
related to intransitivity, which is commonly present in competitive
environments.
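The intransitivity the abstract refers to can be illustrated with the classic rock-paper-scissors game, where pairwise dominance is cyclic and therefore cannot be a total order. A minimal sketch (purely illustrative, not taken from the paper):

```python
import itertools

# Cyclic "beats" relation: each move beats exactly one other move
# and loses to the remaining one, so no move dominates all others.
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def winner(a, b):
    """Return the winning move of a pairwise match, or None on a draw."""
    if a == b:
        return None
    return a if BEATS[a] == b else b

# Enumerate all pairwise matches: the dominance relation contains a
# cycle (rock > scissors > paper > rock), hence it is intransitive.
for a, b in itertools.combinations(BEATS, 2):
    print(f"{a} vs {b}: {winner(a, b)} wins")
```

Because of this cycle, any learner that ranks strategies by pairwise wins can be beaten by some opponent, which is the intuition behind the limitations discussed above.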
Related papers
- Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization [53.15874572081944]
We study computability in the deep learning framework from two perspectives.
We show algorithmic limitations in training deep neural networks even in cases where the underlying problem is well-behaved.
Finally, we show that in quantized versions of classification and deep network training, computability restrictions do not arise or can be overcome to a certain degree.
arXiv Detail & Related papers (2024-08-12T15:02:26Z)
- Joint Learning of Policy with Unknown Temporal Constraints for Safe Reinforcement Learning [0.0]
We propose a framework that concurrently learns safety constraints and optimal RL policies.
The framework is underpinned by theorems that establish the convergence of our joint learning process.
We showcased our framework in grid-world environments, successfully identifying both acceptable safety constraints and RL policies.
arXiv Detail & Related papers (2023-04-30T21:15:07Z)
- Computational-level Analysis of Constraint Compliance for General Intelligence [4.383011485317949]
Rules, manners, laws, and moral imperatives are examples of classes of constraints that govern human behavior.
Despite such messiness, humans incorporate constraints in their decisions robustly and rapidly.
General, artificially-intelligent agents must also be able to navigate the messiness of systems of real-world constraints.
arXiv Detail & Related papers (2023-03-08T03:25:24Z)
- Reinforcement Learning with Stepwise Fairness Constraints [50.538878453547966]
We introduce the study of reinforcement learning with stepwise fairness constraints.
We provide learning algorithms with strong theoretical guarantees in regard to policy optimality and fairness violation.
arXiv Detail & Related papers (2022-11-08T04:06:23Z)
- On Rate-Distortion Theory in Capacity-Limited Cognition & Reinforcement Learning [43.19983737333797]
Decision-making agents in the real world do so under limited information-processing capabilities and without access to cognitive or computational resources.
We present a brief survey of information-theoretic models of capacity-limited decision making in biological and artificial agents.
arXiv Detail & Related papers (2022-10-30T16:39:40Z)
- Constrained Learning with Non-Convex Losses [119.8736858597118]
Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
arXiv Detail & Related papers (2021-03-08T23:10:33Z)
- Constrained episodic reinforcement learning in concave-convex and knapsack settings [81.08055425644037]
We provide a modular analysis with strong theoretical guarantees for settings with concave rewards and convex constraints.
Our experiments demonstrate that the proposed algorithm significantly outperforms existing approaches in constrained episodic environments.
arXiv Detail & Related papers (2020-06-09T05:02:44Z)
- Lagrangian Duality for Constrained Deep Learning [51.2216183850835]
This paper explores the potential of Lagrangian duality for learning applications that feature complex constraints.
In energy domains, the combination of Lagrangian duality and deep learning can be used to obtain state-of-the-art results.
In transprecision computing, Lagrangian duality can complement deep learning to impose monotonicity constraints on the predictor.
arXiv Detail & Related papers (2020-01-26T03:38:43Z)
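The Lagrangian duality idea summarized in the entry above can be sketched with a toy dual-ascent loop (a minimal illustration on a one-dimensional convex problem, not the paper's method): descend on the primal variable and ascend on the multiplier, projecting the multiplier back to the nonnegative orthant.

```python
# Sketch of Lagrangian dual ascent on a toy constrained problem:
#   minimize f(x) = (x - 3)^2   subject to   g(x) = x - 1 <= 0.
# The Lagrangian is L(x, lam) = f(x) + lam * g(x); we take gradient
# steps down in x and up in lam, keeping lam >= 0.

def f_grad(x):
    return 2.0 * (x - 3.0)

def g(x):
    return x - 1.0

x, lam = 0.0, 0.0
eta_x, eta_lam = 0.05, 0.05
for _ in range(2000):
    x -= eta_x * (f_grad(x) + lam)        # dL/dx = f'(x) + lam, since g'(x) = 1
    lam = max(0.0, lam + eta_lam * g(x))  # dL/dlam = g(x), projected to lam >= 0

print(round(x, 3), round(lam, 3))  # x near the constrained optimum 1, lam near 4
```

At convergence the KKT condition f'(x) + lam = 0 holds at x = 1, giving lam = 4; in deep learning applications, the same descent-ascent pattern is applied to network weights and constraint multipliers.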
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.