Un jeu à débattre pour sensibiliser à l'Intelligence Artificielle dans
le contexte de la pandémie de COVID-19
(A debate game to raise awareness of Artificial Intelligence in the context of the COVID-19 pandemic)
- URL: http://arxiv.org/abs/2304.12186v1
- Date: Wed, 19 Apr 2023 09:06:10 GMT
- Title: Un jeu à débattre pour sensibiliser à l'Intelligence Artificielle dans
le contexte de la pandémie de COVID-19
- Authors: Carole Adam, Cédric Lauradoux
- Abstract summary: We propose a serious game in the form of a civic debate aimed at selecting an AI solution to control a pandemic.
The game targets high school students; it was first tested at a science fair and is now freely available.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Artificial Intelligence is more and more pervasive in our lives. Many
important decisions are delegated to AI algorithms: granting access to higher
education, determining prison sentences, driving vehicles autonomously...
Engineers and researchers are educated in this field, while the general
population has very little knowledge about AI. As a result, people are very
sensitive to the (more or less accurate) ideas disseminated by the media: an AI
that is unbiased, infallible, and will either save the world or lead to its
demise. We therefore believe, as highlighted by UNESCO, that it is essential to
provide the population with a general understanding of AI algorithms, so that
they can choose wisely whether or not to use them. To this end, we propose a
serious game in the form of a civic debate aimed at selecting an AI solution to
control a pandemic. The game targets high school students; it was first tested
at a science fair and is now freely available.
Related papers
- Need of AI in Modern Education: in the Eyes of Explainable AI (xAI) [0.0]
This chapter tries to shed light on the complex ways AI operates, especially concerning biases.
These are the foundational steps towards better educational policies, which include using AI in ways that are more reliable, accountable, and beneficial for everyone involved.
arXiv Detail & Related papers (2024-07-31T08:11:33Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - AI in Games: Techniques, Challenges and Opportunities [40.86375378643978]
Various game AI systems (AIs) have been developed such as Libratus, OpenAI Five and AlphaStar, beating professional human players.
In this paper, we survey recent successful game AIs, covering board game AIs, card game AIs, first-person shooting game AIs and real time strategy game AIs.
arXiv Detail & Related papers (2021-11-15T09:35:53Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - The Role of Social Movements, Coalitions, and Workers in Resisting
Harmful Artificial Intelligence and Contributing to the Development of
Responsible AI [0.0]
Coalitions in all sectors are acting worldwide to resist harmful applications of AI.
There are biased, wrongful, and disturbing assumptions embedded in AI algorithms.
Perhaps one of the greatest contributions of AI will be to make us understand how important human wisdom truly is in life on earth.
arXiv Detail & Related papers (2021-07-11T18:51:29Z) - The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI on organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - "Weak AI" is Likely to Never Become "Strong AI", So What is its Greatest
Value for us? [4.497097230665825]
Many researchers argue that little substantial progress has been made for AI in recent decades.
The author (1) explains why controversies about AI exist and (2) distinguishes two paradigms of AI research, termed "weak AI" and "strong AI".
arXiv Detail & Related papers (2021-03-29T02:57:48Z) - Towards AI Forensics: Did the Artificial Intelligence System Do It? [2.5991265608180396]
We focus on AI that is potentially "malicious by design" and on grey-box analysis.
Our evaluation using convolutional neural networks illustrates challenges and ideas for identifying malicious AI.
arXiv Detail & Related papers (2020-05-27T20:28:19Z) - Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.