Building Symbiotic AI: Reviewing the AI Act for a Human-Centred, Principle-Based Framework
- URL: http://arxiv.org/abs/2501.08046v2
- Date: Thu, 13 Feb 2025 14:34:52 GMT
- Title: Building Symbiotic AI: Reviewing the AI Act for a Human-Centred, Principle-Based Framework
- Authors: Miriana Calvano, Antonio Curci, Giuseppe Desolda, Andrea Esposito, Rosa Lanzilotti, Antonio Piccinno
- Abstract summary: The European Union has released a new legal framework, the AI Act, to regulate AI.
At the same time, researchers offer a new perspective on AI systems, commonly known as Human-Centred AI (HCAI).
This article aims to identify principles that characterise the design and development of Symbiotic AI systems.
- Score: 3.723174617224632
- Abstract: Artificial Intelligence (AI) is spreading quickly as new technologies and services take over modern society. Regulating AI design, development, and use is necessary to avoid unethical and potentially dangerous consequences for humans. The European Union (EU) has released a new legal framework, the AI Act, which regulates AI through a risk-based approach intended to safeguard humans during interaction. At the same time, researchers offer a new perspective on AI systems, commonly known as Human-Centred AI (HCAI), highlighting the need for a human-centred approach to their design. In this context, Symbiotic AI (SAI), a subtype of HCAI, promises to enhance human capabilities through a deeper and continuous collaboration between human intelligence and AI. This article presents the results of a Systematic Literature Review (SLR) that aims to identify principles characterising the design and development of Symbiotic AI systems while keeping humans at the core of the process. Through content analysis, four principles emerged from the review that must be applied to create Human-Centred AI systems capable of establishing a symbiotic relationship with humans. In addition, current trends and challenges were identified to indicate open questions that may guide future research on SAI systems that comply with the AI Act.
Related papers
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success [4.202570851109354]
The EU has enacted the AI Act, regulating market access for AI-based systems.
The Act focuses regulation on transparency, explainability, and the human ability to understand and control AI systems.
The EU issues a democratic call for human-centered AI systems and, in turn, an interdisciplinary research agenda for human-centered innovation in AI development.
arXiv Detail & Related papers (2024-02-22T17:35:29Z)
- Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain [0.7770029179741429]
The intersection of Artificial Intelligence (AI) and neuroscience in Explainable AI (XAI) is pivotal for enhancing transparency and interpretability in complex decision-making processes.
This paper explores the evolution of XAI methodologies, ranging from feature-based to human-centric approaches.
The challenges in achieving explainability in generative models, ensuring responsible AI practices, and addressing ethical implications are discussed.
arXiv Detail & Related papers (2024-02-07T14:09:11Z)
- Reflective Hybrid Intelligence for Meaningful Human Control in Decision-Support Systems [4.1454448964078585]
We introduce the notion of self-reflective AI systems for meaningful human control over AI systems.
We propose a framework that integrates knowledge from psychology and philosophy with formal reasoning methods and machine learning approaches.
We argue that self-reflective AI systems can lead to self-reflective hybrid systems (human + AI).
arXiv Detail & Related papers (2023-07-12T13:32:24Z)
- Applying HCAI in developing effective human-AI teaming: A perspective from human-AI joint cognitive systems [10.746728034149989]
Research and application have used human-AI teaming (HAT) as a new paradigm to develop AI systems.
We propose and elaborate on a conceptual framework of human-AI joint cognitive systems (HAIJCS) to represent and implement HAT.
arXiv Detail & Related papers (2023-07-08T06:26:38Z)
- Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety [2.3572498744567127]
We argue that alignment to human intent is insufficient for safe AI systems.
We argue that preserving humans' long-term agency may be a more robust standard.
arXiv Detail & Related papers (2023-05-30T17:14:01Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.