Strategic Decisions Survey, Taxonomy, and Future Directions from
Artificial Intelligence Perspective
- URL: http://arxiv.org/abs/2210.12373v1
- Date: Sat, 22 Oct 2022 07:01:10 GMT
- Title: Strategic Decisions Survey, Taxonomy, and Future Directions from
Artificial Intelligence Perspective
- Authors: Caesar Wu, Kotagiri Ramamohanarao, Rui Zhang, Pascal Bouvry
- Abstract summary: We develop a systematic taxonomy of decision-making frames that consists of 6 bases, 18 categories, and 54 frames.
Compared with traditional models, it covers irrational, non-rational, and rational frames for dealing with certainty, uncertainty, complexity, ambiguity, chaos, and ignorance.
- Score: 15.649335092388897
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Strategic Decision-Making is always challenging because it is inherently
uncertain, ambiguous, risky, and complex. It is the art of possibility. We
develop a systematic taxonomy of decision-making frames that consists of 6
bases, 18 categories, and 54 frames. We aim to lay out a computational
foundation that makes it possible to capture a comprehensive landscape view
of a strategic problem. Compared with traditional models, it covers
irrational, non-rational, and rational frames for dealing with certainty,
uncertainty, complexity, ambiguity, chaos, and ignorance.
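The abstract describes the taxonomy only at the level of counts: 6 bases, each split into 3 categories, each holding 3 frames, which yields the 18 and 54 totals. A minimal sketch of how such a 6 x 3 x 3 structure could be represented in code is given below; the base, category, and frame names are placeholders, not the labels used in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str

@dataclass
class Category:
    name: str
    frames: list[Frame] = field(default_factory=list)

@dataclass
class Base:
    name: str
    categories: list[Category] = field(default_factory=list)

# Hypothetical 6 x 3 x 3 structure: 6 bases -> 18 categories -> 54 frames.
# Names are placeholders; the paper's actual labels are not reproduced here.
taxonomy = [
    Base(
        name=f"base_{b}",
        categories=[
            Category(
                name=f"base_{b}/category_{c}",
                frames=[Frame(f"base_{b}/category_{c}/frame_{f}") for f in range(3)],
            )
            for c in range(3)
        ],
    )
    for b in range(6)
]

n_categories = sum(len(b.categories) for b in taxonomy)                   # 18
n_frames = sum(len(c.frames) for b in taxonomy for c in b.categories)     # 54
print(n_categories, n_frames)
```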
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- RATT: A Thought Structure for Coherent and Correct LLM Reasoning [23.28162642780579]
We introduce the Retrieval Augmented Thought Tree (RATT), a novel thought structure that considers both overall logical soundness and factual correctness at each step of the thinking process.
A range of experiments on different types of tasks showcases that the RATT structure significantly outperforms existing methods in factual correctness and logical coherence.
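The abstract gives no implementation detail, so the following is only a hypothetical sketch of a retrieval-augmented thought tree: each candidate thought is scored by a weighted mix of a logical-soundness estimate and a factuality check against retrieved evidence. The propose, retrieve, and scoring functions are stand-ins supplied by the caller, not the procedure from the RATT paper.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ThoughtNode:
    text: str
    parent: Optional["ThoughtNode"] = None
    children: list["ThoughtNode"] = field(default_factory=list)
    score: float = 0.0

def expand(node: ThoughtNode,
           propose: Callable[[str], list[str]],
           retrieve: Callable[[str], list[str]],
           logic_score: Callable[[str], float],
           fact_score: Callable[[str, list[str]], float],
           alpha: float = 0.5) -> list[ThoughtNode]:
    """Expand one node: propose candidate thoughts, retrieve evidence for each,
    and score them by a weighted mix of logical soundness and factual support."""
    children = []
    for candidate in propose(node.text):
        evidence = retrieve(candidate)
        score = alpha * logic_score(candidate) + (1 - alpha) * fact_score(candidate, evidence)
        child = ThoughtNode(text=candidate, parent=node, score=score)
        node.children.append(child)
        children.append(child)
    return children

def best_path(root: ThoughtNode) -> list[ThoughtNode]:
    """Greedily follow the highest-scoring child from the root to a leaf."""
    path, node = [root], root
    while node.children:
        node = max(node.children, key=lambda c: c.score)
        path.append(node)
    return path
```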
arXiv Detail & Related papers (2024-06-04T20:02:52Z)
- LLM as a Mastermind: A Survey of Strategic Reasoning with Large Language Models [75.89014602596673]
Strategic reasoning requires understanding and predicting adversary actions in multi-agent settings while adjusting strategies accordingly.
We explore the scopes, applications, methodologies, and evaluation metrics related to strategic reasoning with Large Language Models.
The survey underscores the importance of strategic reasoning as a critical cognitive capability and offers insights into future research directions and potential improvements.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Trustworthy AI: Deciding What to Decide [41.10597843436572]
We propose a novel framework of Trustworthy AI (TAI) encompassing crucial components of AI.
We aim to use this framework to conduct TAI experiments with quantitative and qualitative research methods.
We formulate an optimal prediction model for strategic investment decisions on credit default swaps (CDS) in the technology sector.
arXiv Detail & Related papers (2023-11-21T13:43:58Z)
- Risk-reducing design and operations toolkit: 90 strategies for managing risk and uncertainty in decision problems [65.268245109828]
This paper develops a catalog of such strategies and a framework for organizing them.
It argues that they provide an efficient response to decision problems that are seemingly intractable due to high uncertainty.
It then proposes a framework to incorporate them into decision theory using multi-objective optimization.
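As a rough illustration of folding such strategies into multi-objective optimization, the sketch below keeps only the Pareto-optimal candidates over two objectives, expected cost and a worst-case-loss risk proxy. The objectives, strategy names, and values are invented for the example and are not taken from the paper's catalog of 90 strategies.

```python
from typing import NamedTuple

class Strategy(NamedTuple):
    name: str
    expected_cost: float    # lower is better
    worst_case_loss: float  # lower is better (a simple risk proxy)

def pareto_front(strategies: list[Strategy]) -> list[Strategy]:
    """Keep strategies that are not dominated on (expected_cost, worst_case_loss)."""
    front = []
    for s in strategies:
        dominated = any(
            o.expected_cost <= s.expected_cost
            and o.worst_case_loss <= s.worst_case_loss
            and (o.expected_cost < s.expected_cost or o.worst_case_loss < s.worst_case_loss)
            for o in strategies
        )
        if not dominated:
            front.append(s)
    return front

# Toy candidates (values are illustrative only).
candidates = [
    Strategy("diversify", 10.0, 4.0),
    Strategy("insure", 12.0, 2.0),
    Strategy("hedge", 11.0, 5.0),       # dominated by "diversify"
    Strategy("do_nothing", 8.0, 15.0),
]
print(pareto_front(candidates))
```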
arXiv Detail & Related papers (2023-09-06T16:14:32Z)
- Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks [142.67349734180445]
Existing algorithms that provide risk-awareness to deep neural networks are complex and ad hoc.
Here we present capsa, a framework for extending models with risk-awareness.
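capsa's actual API is not described in this summary, so the snippet below is only a generic stand-in for the idea of extending a model with risk-awareness: an ensemble wrapper that returns the mean prediction together with ensemble disagreement as a crude epistemic-risk signal. It is not capsa code.

```python
import numpy as np

class RiskAwareEnsemble:
    """Wrap several point-prediction models; report the mean prediction and the
    ensemble standard deviation as a crude epistemic-risk estimate."""

    def __init__(self, models):
        self.models = models  # each model exposes .predict(X) -> np.ndarray

    def predict(self, X):
        preds = np.stack([m.predict(X) for m in self.models], axis=0)
        return preds.mean(axis=0), preds.std(axis=0)

# Usage with toy "models" (plain functions wrapped to expose .predict):
class _Fn:
    def __init__(self, f): self.f = f
    def predict(self, X): return self.f(X)

ensemble = RiskAwareEnsemble([_Fn(lambda X: 2.0 * X), _Fn(lambda X: 2.1 * X)])
mean, risk = ensemble.predict(np.array([1.0, 2.0, 3.0]))
```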
arXiv Detail & Related papers (2023-08-01T02:07:47Z)
- Survey of Trustworthy AI: A Meta Decision of AI [0.41292255339309647]
Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI).
To underpin these domains, we create ten dimensions to measure trust: explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reliability, and sustainability.
arXiv Detail & Related papers (2023-06-01T06:25:01Z)
- On solving decision and risk management problems subject to uncertainty [91.3755431537592]
Uncertainty is a pervasive challenge in decision and risk management.
This paper develops a systematic understanding of such strategies, determines their range of application, and develops a framework to better employ them.
arXiv Detail & Related papers (2023-01-18T19:16:23Z)
- On the Complexity of Adversarial Decision Making [101.14158787665252]
We show that the Decision-Estimation Coefficient is necessary and sufficient to obtain low regret for adversarial decision making.
We provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures.
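For reference, the Decision-Estimation Coefficient of Foster et al. is, roughly, a min-max quantity of the form below, trading off the regret of a decision distribution against the best decision for a model M in the class, penalized by how distinguishable M is from a reference model; the exact variant used for the adversarial setting in this paper may differ.

```latex
\operatorname{dec}_{\gamma}(\mathcal{M}, \bar{M})
  = \inf_{p \in \Delta(\Pi)} \sup_{M \in \mathcal{M}}
    \mathbb{E}_{\pi \sim p}\!\left[
      f^{M}(\pi_{M}) - f^{M}(\pi)
      - \gamma \, D_{\mathrm{H}}^{2}\!\big( M(\pi), \bar{M}(\pi) \big)
    \right]
```

Here f^M(pi) denotes the mean reward of decision pi under model M, pi_M the optimal decision for M, and D_H the Hellinger distance between the observation distributions of M and the reference model.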
arXiv Detail & Related papers (2022-06-27T06:20:37Z)