FlyAI -- The Next Level of Artificial Intelligence is Unpredictable! Injecting Responses of a Living Fly into Decision Making
- URL: http://arxiv.org/abs/2410.12808v1
- Date: Mon, 30 Sep 2024 17:19:59 GMT
- Title: FlyAI -- The Next Level of Artificial Intelligence is Unpredictable! Injecting Responses of a Living Fly into Decision Making
- Authors: Denys J. C. Matthies, Ruben Schlonsak, Hanzhi Zhuang, Rui Song,
- Abstract summary: We introduce a new type of bionic AI that enhances decision-making unpredictability by incorporating responses from a living fly.
Our approach uses a fly's varied reactions to tune an AI agent in the game of Gobang.
- Score: 6.694375709641935
- Abstract: In this paper, we introduce a new type of bionic AI that enhances decision-making unpredictability by incorporating responses from a living fly. Traditional AI systems, while reliable and predictable, lack the nuanced and sometimes unseasoned decision-making seen in humans. Our approach uses a fly's varied reactions to tune an AI agent in the game of Gobang. In a study, we compare the performance of different strategies for altering AI agents and find that the bionic AI agent outperforms human players as well as conventional and white-noise-enhanced AI agents. We contribute a new methodology for creating a bionic random function, along with strategies for enhancing conventional AI agents, ultimately improving unpredictability.
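To make the abstract's core mechanism concrete, here is a minimal sketch of how a bionic random function could be injected into a conventional game agent's move selection. This is an illustration under assumptions, not the paper's implementation: the names (fly_signal, choose_move, the 0.1 perturbation scale) are hypothetical, and the fly-derived signal is stubbed with a synthetic draw because the measurement interface is not described in the abstract.
```python
import random

def white_noise(scale=0.1):
    """Baseline perturbation: zero-mean Gaussian white noise."""
    return random.gauss(0.0, scale)

def fly_signal(scale=0.1):
    """Stand-in for the paper's bionic random function.

    In the paper's setup this value would be derived from a living
    fly's measured reaction; here it is stubbed with a synthetic
    draw purely for illustration.
    """
    return scale * random.uniform(-1.0, 1.0) ** 3

def choose_move(candidates, evaluate, perturb=fly_signal):
    """Select the candidate move whose heuristic score, after an
    injected perturbation, is highest. `evaluate` is the unmodified
    conventional agent's scoring function."""
    return max(candidates, key=lambda move: evaluate(move) + perturb())

# Toy usage: three candidate Gobang moves with heuristic scores.
scores = {(7, 7): 0.80, (7, 8): 0.79, (3, 3): 0.20}
move = choose_move(list(scores), scores.get)                    # bionic variant
baseline = choose_move(list(scores), scores.get, white_noise)   # white-noise baseline
```
The design point the sketch tries to capture is that the conventional agent's evaluation function stays untouched; the bionic and white-noise variants differ only in the noise source injected at move selection.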
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to a deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- The Role of Heuristics and Biases During Complex Choices with an AI Teammate [0.0]
We argue that classic experimental methods are insufficient for studying complex choices made with AI helpers.
We show that framing and anchoring effects impact how people work with an AI helper and are predictive of choice outcomes.
arXiv Detail & Related papers (2023-01-14T20:06:43Z)
- Improving Human-AI Collaboration With Descriptions of AI Behavior [14.904401331154062]
People work with AI systems to improve their decision making, but often under- or over-rely on AI predictions and perform worse than they would have unassisted.
To help people appropriately rely on AI aids, we propose showing them behavior descriptions.
arXiv Detail & Related papers (2023-01-06T00:33:08Z)
- Measuring an artificial intelligence agent's trust in humans using machine incentives [2.1016374925364616]
Gauging an AI agent's trust in humans is challenging because a dishonest agent might respond falsely when asked about its trust in humans.
We present a method for incentivizing machine decisions without altering an AI agent's underlying algorithms or goal orientation.
Our experiments suggest that one of the most advanced AI language models to date alters its social behavior in response to incentives.
arXiv Detail & Related papers (2022-12-27T06:05:49Z)
- On Avoiding Power-Seeking by Artificial Intelligence [93.9264437334683]
We do not know how to align a very intelligent AI agent's behavior with human interests.
I investigate whether we can build smart AI agents which have limited impact on the world, and which do not autonomously seek power.
arXiv Detail & Related papers (2022-06-23T16:56:21Z)
- On the Influence of Explainable AI on Automation Bias [0.0]
We aim to shed light on the potential of explainable AI (XAI) to influence automation bias.
We conduct an online experiment with regard to hotel review classifications and discuss first results.
arXiv Detail & Related papers (2022-04-19T12:54:23Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Instructive artificial intelligence (AI) for human training, assistance, and explainability [0.24629531282150877]
We show how a neural network might instruct human trainees as an alternative to traditional approaches to explainable AI (XAI).
An AI examines human actions and calculates variations on the human strategy that lead to better performance.
Results will be presented on AI instruction's ability to improve human decision-making and human-AI teaming in Hanabi.
arXiv Detail & Related papers (2021-11-02T16:46:46Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance (a toy illustration of this effect follows the list below).
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
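A toy model can illustrate the closing finding above, that the most accurate AI need not be the best teammate. The sketch below assumes a simplified acceptance model in which the human accepts the AI's recommendation with probability equal to the AI's stated confidence; expected_team_accuracy, p_accept, and the numbers are all hypothetical, not taken from the paper.
```python
def expected_team_accuracy(instances, p_accept):
    """Expected accuracy of a human-AI team in which, per instance,
    the human accepts the AI recommendation with probability
    p_accept(confidence) and otherwise solves the task alone.
    Each instance is (ai_correct_prob, ai_confidence, human_correct_prob)."""
    total = 0.0
    for ai_acc, confidence, human_acc in instances:
        accept = p_accept(confidence)
        total += accept * ai_acc + (1 - accept) * human_acc
    return total / len(instances)

# One easy case the AI handles well, one hard case it flags with
# low confidence so the (stronger) human takes over.
cases = [(0.9, 0.9, 0.6), (0.4, 0.2, 0.8)]
print(expected_team_accuracy(cases, p_accept=lambda c: c))  # 0.795
```
Here the AI is often wrong on the second case but signals low confidence, so the team still scores well; a more accurate model that was overconfident on such cases could yield a lower team score despite higher standalone accuracy.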