IndigoVX: Where Human Intelligence Meets AI for Optimal Decision Making
- URL: http://arxiv.org/abs/2307.11516v1
- Date: Fri, 21 Jul 2023 11:54:53 GMT
- Title: IndigoVX: Where Human Intelligence Meets AI for Optimal Decision Making
- Authors: Kais Dukes
- Abstract summary: This paper defines a new approach for augmenting human intelligence with AI for optimal goal solving.
Our proposed AI, Indigo, is an acronym for Informed Numerical Decision-making through Iterative Goal-Oriented optimization.
We envisage this method being applied to games or business strategies, with the human providing strategic context and the AI offering optimal, data-driven moves.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper defines a new approach for augmenting human intelligence with AI
for optimal goal solving. Our proposed AI, Indigo, is an acronym for Informed
Numerical Decision-making through Iterative Goal-Oriented optimization. When
combined with a human collaborator, we term the joint system IndigoVX, for
Virtual eXpert. The system is conceptually simple. We envisage this method
being applied to games or business strategies, with the human providing
strategic context and the AI offering optimal, data-driven moves. Indigo
operates through an iterative feedback loop, harnessing the human expert's
contextual knowledge and the AI's data-driven insights to craft and refine
strategies towards a well-defined goal. Using a quantified three-score schema,
this hybridization allows the combined team to evaluate strategies and refine
their plan, while adapting to challenges and changes in real-time.
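The iterative feedback loop described above can be sketched in Python. This is a minimal illustration under stated assumptions: the paper does not name the three score axes or the acceptance rule, so the `progress`/`risk`/`feasibility` fields, the `total()` aggregation, the threshold, and all function names here are hypothetical placeholders for the human expert's and the AI's contributions.

```python
from dataclasses import dataclass

@dataclass
class Scores:
    # Hypothetical three-score schema; the paper only says "quantified
    # three-score schema" and does not specify the axes.
    progress: float     # estimated progress toward the goal
    risk: float         # downside risk of the move (lower is better)
    feasibility: float  # how practical the move is to execute

    def total(self) -> float:
        # Illustrative aggregation, not the paper's actual formula.
        return self.progress - self.risk + self.feasibility

def ai_propose(context: str, step: int) -> str:
    # Placeholder for Indigo's data-driven move generation.
    return f"move-{step} given context of length {len(context)}"

def human_review(move: str) -> Scores:
    # Placeholder for the human expert's quantified evaluation.
    return Scores(progress=7.0, risk=2.0, feasibility=8.0)

def indigovx_loop(goal: str, context: str,
                  max_iters: int = 3, threshold: float = 12.0) -> list[str]:
    """Iterate: AI proposes a move, the human scores it, and accepted
    moves refine the shared context for the next round."""
    plan: list[str] = []
    for step in range(max_iters):
        move = ai_propose(context, step)
        scores = human_review(move)
        if scores.total() >= threshold:
            plan.append(move)
            context = f"{context}; accepted {move}"
        # otherwise the move is rejected and the AI re-proposes next round
    return plan
```

With the stub scores above (total 13.0, above the threshold), every proposed move is accepted and the plan grows one move per iteration.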
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Beyond Prompts: Learning from Human Communication for Enhanced AI Intent Alignment [30.93897332124916]
We study human strategies for intent specification in human-human communication.
This study aims to advance toward a human-centered AI system by bringing together human communication strategies for the design of AI systems.
arXiv Detail & Related papers (2024-05-09T11:10:29Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Decision-Oriented Dialogue for Human-AI Collaboration [62.367222979251444]
We describe a class of tasks called decision-oriented dialogues, in which AI assistants such as large language models (LMs) must collaborate with one or more humans via natural language to help them make complex decisions.
We formalize three domains in which users face everyday decisions: (1) choosing an assignment of reviewers to conference papers, (2) planning a multi-step itinerary in a city, and (3) negotiating travel plans for a group of friends.
For each task, we build a dialogue environment where agents receive a reward based on the quality of the final decision they reach.
arXiv Detail & Related papers (2023-05-31T17:50:02Z)
- Learning Complementary Policies for Human-AI Teams [22.13683008398939]
We propose a novel human-AI collaboration framework for selecting advantageous courses of action.
Our solution aims to exploit the human-AI complementarity to maximize decision rewards.
arXiv Detail & Related papers (2023-02-06T17:22:18Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Blessing from Human-AI Interaction: Super Reinforcement Learning in Confounded Environments [19.944163846660498]
We introduce the paradigm of super reinforcement learning that takes advantage of Human-AI interaction for data driven sequential decision making.
In the decision process with unmeasured confounding, the actions taken by past agents can offer valuable insights into undisclosed information.
We develop several super-policy learning algorithms and systematically study their theoretical properties.
arXiv Detail & Related papers (2022-09-29T16:03:07Z)
- Towards Explainable Artificial Intelligence in Banking and Financial Services [0.0]
We study and analyze the recent work done in Explainable Artificial Intelligence (XAI) methods and tools.
We introduce a novel XAI process, which facilitates producing explainable models while maintaining a high level of learning performance.
We develop a digital dashboard to facilitate interacting with the algorithm results.
arXiv Detail & Related papers (2021-12-14T08:02:13Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.