Automated Machine Learning, Bounded Rationality, and Rational
Metareasoning
- URL: http://arxiv.org/abs/2109.04744v1
- Date: Fri, 10 Sep 2021 09:10:20 GMT
- Title: Automated Machine Learning, Bounded Rationality, and Rational
Metareasoning
- Authors: Eyke Hüllermeier and Felix Mohr and Alexander Tornede and Marcel Wever
- Abstract summary: We will look at automated machine learning (AutoML) and related problems from the perspective of bounded rationality.
Taking actions under bounded resources requires an agent to reflect on how to use these resources in an optimal way.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The notion of bounded rationality originated from the insight that perfectly
rational behavior cannot be realized by agents with limited cognitive or
computational resources. Research on bounded rationality, mainly initiated by
Herbert Simon, has a longstanding tradition in economics and the social
sciences, but also plays a major role in modern AI and intelligent agent
design. Taking actions under bounded resources requires an agent to reflect on
how to use these resources in an optimal way - hence, to reason and make
decisions on a meta-level. In this paper, we will look at automated machine
learning (AutoML) and related problems from the perspective of bounded
rationality, essentially viewing an AutoML tool as an agent that has to train a
model on a given set of data, and the search for a good way of doing so (a
suitable "ML pipeline") as deliberation on a meta-level.
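The abstract's framing of AutoML as meta-level deliberation can be made concrete as a budget-allocation loop: the agent spends limited evaluation resources across candidate pipelines, pruning the pool as evidence accumulates. The sketch below is a toy illustration under assumed names (`metareason`, `evaluate`, and the successive-halving-style schedule are not from the paper), not the authors' method.

```python
def metareason(candidates, evaluate, budget):
    """Toy meta-level deliberation: spend a fixed evaluation budget
    across candidate ML pipelines, halving the pool each round
    (a successive-halving-style schedule) and returning the best.

    evaluate(candidate, n_evals) is assumed to return a validation
    score after spending n_evals units of the budget."""
    pool = list(candidates)
    scores = {c: 0.0 for c in pool}
    while len(pool) > 1 and budget > 0:
        # Split the remaining budget so later (smaller) rounds
        # can afford more evaluations per surviving candidate.
        per_candidate = max(1, budget // (2 * len(pool)))
        for c in pool:
            scores[c] = evaluate(c, per_candidate)
            budget -= per_candidate
        # Keep the better-scoring half of the pool.
        pool.sort(key=lambda c: scores[c], reverse=True)
        pool = pool[: max(1, len(pool) // 2)]
    return pool[0], scores[pool[0]]
```

With a deterministic `evaluate`, the loop simply converges on the highest-scoring pipeline; in a realistic setting `evaluate` would be a noisy cross-validation estimate, which is exactly where the meta-level trade-off between exploration and resource cost arises.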
Related papers
- Multi-Agent Reinforcement Learning for Autonomous Driving: A Survey [14.73689900685646]
Reinforcement Learning (RL) is a potent tool for sequential decision-making and has achieved performance surpassing human capabilities.
As the extension of RL to the multi-agent setting, multi-agent RL (MARL) must not only learn a control policy but also account for interactions with all other agents in the environment.
Simulators are crucial for obtaining realistic data, which is fundamental to RL.
arXiv Detail & Related papers (2024-08-19T03:31:20Z) - Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z) - Towards a Responsible AI Metrics Catalogue: A Collection of Metrics for
AI Accountability [28.67753149592534]
This study bridges the accountability gap by introducing our effort towards a comprehensive metrics catalogue.
Our catalogue delineates process metrics that underpin procedural integrity, resource metrics that provide necessary tools and frameworks, and product metrics that reflect the outputs of AI systems.
arXiv Detail & Related papers (2023-11-22T04:43:16Z) - LLM4Drive: A Survey of Large Language Models for Autonomous Driving [62.10344445241105]
Large language models (LLMs) have demonstrated abilities including understanding context, logical reasoning, and generating answers.
In this paper, we systematically review a line of research on Large Language Models for Autonomous Driving (LLM4AD).
arXiv Detail & Related papers (2023-11-02T07:23:33Z) - Decision-Making Among Bounded Rational Agents [5.24482648010213]
We introduce the concept of bounded rationality from an information-theoretic view into the game-theoretic framework.
This allows the robots to reason about other agents' sub-optimal behaviors and act accordingly under their computational constraints.
We demonstrate that the resulting framework allows the robots to reason about different levels of rational behavior in other agents and to compute a reasonable strategy under their computational constraints.
arXiv Detail & Related papers (2022-10-17T00:29:24Z) - One-way Explainability Isn't The Message [2.618757282404254]
We argue that requirements on both human and machine in this context are significantly different.
The design of such human-machine systems should be driven by repeated, two-way intelligibility of information.
We propose operational principles -- we call them Intelligibility Axioms -- to guide the design of a collaborative decision-support system.
arXiv Detail & Related papers (2022-05-05T09:15:53Z) - Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline
Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z) - Modeling Bounded Rationality in Multi-Agent Simulations Using Rationally
Inattentive Reinforcement Learning [85.86440477005523]
We study more human-like RL agents which incorporate an established model of human-irrationality, the Rational Inattention (RI) model.
RIRL models the cost of cognitive information processing using mutual information.
We show that using RIRL yields a rich spectrum of new equilibrium behaviors that differ from those found under rational assumptions.
arXiv Detail & Related papers (2022-01-18T20:54:00Z) - CausalCity: Complex Simulations with Agency for Causal Discovery and
Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is to introduce agency, such that it is simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z)
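The Rationally Inattentive RL entry above prices cognitive effort as the mutual information between states and actions. As a minimal, hedged sketch (the function name and list-based representation are assumptions, not the paper's implementation), the quantity I(S;A) for a discrete state distribution and stochastic policy can be computed directly:

```python
import math

def mutual_information(p_state, policy):
    """I(S;A) in nats for a discrete state distribution p_state[s]
    and a stochastic policy policy[s][a] = pi(a|s).
    In a Rational-Inattention-style model, this quantity serves as
    the information-processing cost charged against reward."""
    n_actions = len(policy[0])
    # Marginal action distribution: p(a) = sum_s p(s) * pi(a|s)
    p_action = [sum(p_state[s] * policy[s][a] for s in range(len(p_state)))
                for a in range(n_actions)]
    mi = 0.0
    for s, ps in enumerate(p_state):
        for a in range(n_actions):
            pa_s = policy[s][a]
            if ps > 0 and pa_s > 0:
                mi += ps * pa_s * math.log(pa_s / p_action[a])
    return mi
```

A state-independent policy yields zero cost (the agent processes no state information), while a deterministic state-dependent policy over two equiprobable states costs log 2 nats, the maximum for that setup.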
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.