Aligning Generalisation Between Humans and Machines
- URL: http://arxiv.org/abs/2411.15626v1
- Date: Sat, 23 Nov 2024 18:36:07 GMT
- Title: Aligning Generalisation Between Humans and Machines
- Authors: Filip Ilievski, Barbara Hammer, Frank van Harmelen, Benjamin Paassen, Sascha Saralajew, Ute Schmid, Michael Biehl, Marianna Bolognesi, Xin Luna Dong, Kiril Gashteovski, Pascal Hitzler, Giuseppe Marra, Pasquale Minervini, Martin Mundt, Axel-Cyrille Ngonga Ngomo, Alessandro Oltramari, Gabriella Pasi, Zeynep G. Saribatur, Luciano Serafini, John Shawe-Taylor, Vered Shwartz, Gabriella Skitalinskaya, Clemens Stachl, Gido M. van de Ven, Thomas Villmann
- Abstract summary: Recent advances in AI have resulted in technology that can aid humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly highlights the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
- Score: 74.120848518198
- Abstract: Recent advances in AI -- including generative approaches -- have resulted in technology that can aid humans in scientific discovery and decision support but may also disrupt democracies and target individuals. The responsible use of AI increasingly highlights the need for human-AI teaming, necessitating effective interaction between humans and machines. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise. In cognitive science, human generalisation commonly involves abstraction and concept learning. In contrast, AI generalisation encompasses out-of-domain generalisation in machine learning, rule-based reasoning in symbolic AI, and abstraction in neuro-symbolic AI. In this perspective paper, we combine insights from AI and cognitive science to identify key commonalities and differences across three dimensions: notions of generalisation, methods for generalisation, and evaluation of generalisation. We map the different conceptualisations of generalisation in AI and cognitive science along these three dimensions and consider their role in human-AI teaming. This results in interdisciplinary challenges across AI and cognitive science that must be tackled to provide a foundation for effective and cognitively supported alignment in human-AI teaming scenarios.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Rolling in the deep of cognitive and AI biases [1.556153237434314]
We argue that there is an urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed, and deployed.
We address this critical issue by following a radical new methodology under which human cognitive biases become core entities in our AI fairness overview.
We introduce a new mapping that relates human cognitive biases to AI biases, and we detect relevant fairness intensities and interdependencies.
arXiv Detail & Related papers (2024-07-30T21:34:04Z) - Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z) - The Interplay of Learning, Analytics, and Artificial Intelligence in Education: A Vision for Hybrid Intelligence [0.45207442500313766]
I challenge the prevalent narrow conceptualisation of AI as tools, and argue for the importance of alternative conceptualisations of AI.
I highlight the differences between human intelligence and artificial information processing, and posit that AI can also serve as an instrument for understanding human learning.
The paper presents three unique conceptualisations of AI: the externalization of human cognition, the internalization of AI models to influence human mental models, and the extension of human cognition via tightly coupled human-AI hybrid intelligence systems.
arXiv Detail & Related papers (2024-03-24T10:07:46Z) - Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain [0.7770029179741429]
The intersection of Artificial Intelligence (AI) and neuroscience in Explainable AI (XAI) is pivotal for enhancing transparency and interpretability in complex decision-making processes.
This paper explores the evolution of XAI methodologies, ranging from feature-based to human-centric approaches.
The challenges in achieving explainability in generative models, ensuring responsible AI practices, and addressing ethical implications are discussed.
arXiv Detail & Related papers (2024-02-07T14:09:11Z) - Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - BIASeD: Bringing Irrationality into Automated System Design [12.754146668390828]
We claim that the future of human-machine collaboration will entail the development of AI systems that model, understand, and possibly replicate human cognitive biases.
We categorize existing cognitive biases from the perspective of AI systems, identify three broad areas of interest and outline research directions for the design of AI systems that have a better understanding of our own biases.
arXiv Detail & Related papers (2022-10-01T02:52:38Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article addresses aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - AI-Mediated Exchange Theory [9.100327242239203]
We propose the development of a framework, AI-Mediated Exchange Theory (AI-MET).
As an extension to Social Exchange Theory (SET) in the social sciences, AI-MET views AI as influencing human-to-human relationships via a taxonomy of mediation mechanisms.
arXiv Detail & Related papers (2020-03-04T14:18:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.