Generative AI-based closed-loop fMRI system
- URL: http://arxiv.org/abs/2401.16742v1
- Date: Tue, 30 Jan 2024 04:40:49 GMT
- Title: Generative AI-based closed-loop fMRI system
- Authors: Mikihiro Kasahara, Taiki Oka, Vincent Taschereau-Dumouchel, Mitsuo
Kawato, Hiroki Takakura, Aurelio Cortese
- Abstract summary: DecNefGAN is a novel framework that combines a generative adversarial system and a neural reinforcement model.
It can contribute to elucidating how the human brain responds to and counteracts the potential influence of generative AI.
- Score: 0.41942958779358674
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While generative AI is now widespread and useful in society, there are
potential risks of misuse, e.g., unconsciously influencing cognitive processes
or decision-making. Although this poses a security problem in the cognitive
domain, there has been no research on the neural and computational mechanisms
that counteract the impact of malicious generative AI on humans. We propose
DecNefGAN, a novel framework that combines a generative adversarial system and
a neural reinforcement model. More specifically, DecNefGAN bridges human and
generative AI in a closed-loop system, with the AI creating stimuli that induce
specific mental states, thus exerting external control over neural activity.
The objective of the human is the opposite: to compete and reach an orthogonal
mental state. This framework can contribute to elucidating how the human brain
responds to and counteracts the potential influence of generative AI.
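To make the closed-loop interaction described in the abstract concrete, the sketch below simulates one possible reading of it in Python/NumPy: a generator proposes a stimulus, a decoder scores the evoked brain pattern, the participant counteracts by steering activity toward the decoder's orthogonal (null) direction, and the generator updates its latent code to push the decoded score back up. This is a minimal illustration only; the linear decoder, the linear stand-in generator, the dimensions, and the projection used to model the participant's counteraction are assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of a DecNefGAN-style closed loop (illustrative assumptions only:
# linear decoder, linear stand-in generator, simulated "participant").
rng = np.random.default_rng(0)
n_voxels, n_latent = 100, 16

decoder_w = rng.normal(size=n_voxels)               # assumed pretrained state decoder
generator = rng.normal(size=(n_voxels, n_latent))   # stand-in for the generative model
latent = rng.normal(size=n_latent)                  # latent code the AI adapts each trial

for trial in range(10):
    stimulus = generator @ latent                               # AI creates a stimulus
    evoked = stimulus + rng.normal(scale=0.1, size=n_voxels)    # simulated brain response
    # Participant's goal: reach an orthogonal state, modeled here as projecting the
    # evoked pattern onto the decoder's null space.
    counter = evoked - (decoder_w @ evoked) / (decoder_w @ decoder_w) * decoder_w
    score_ai = decoder_w @ evoked                   # state the AI tried to induce
    score_human = decoder_w @ counter               # state after counteraction (~0)
    # Adversarial update: nudge the latent code to increase the decoded score.
    grad = generator.T @ decoder_w
    latent += 0.05 * grad / np.linalg.norm(grad)
    print(f"trial {trial}: induced {score_ai:+.2f}, counteracted {score_human:+.2f}")
```

In the actual framework, the decoder would presumably be trained on the participant's own fMRI data and the generator would be a deep generative model updated from the decoded feedback signal.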
Related papers
- NeuroAI for AI Safety [2.0243003325958253]
Humans are the only known agents capable of general intelligence.
Neuroscience may hold important keys to technical AI safety that are currently underexplored and underutilized.
We highlight and critically evaluate several paths toward AI safety inspired by neuroscience.
arXiv Detail & Related papers (2024-11-27T17:18:51Z)
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI [55.99010491370177]
We argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI.
Anthropomorphic AI systems are increasingly prone to generating outputs that are perceived to be human-like.
arXiv Detail & Related papers (2024-10-11T04:57:41Z)
- Is artificial consciousness achievable? Lessons from the human brain [0.0]
We analyse the question of developing artificial consciousness from an evolutionary perspective.
We take the evolution of the human brain and its relation with consciousness as a reference model.
We propose to specify clearly what AI conscious processing would have in common with, and how it would differ from, full human conscious experience.
arXiv Detail & Related papers (2024-04-18T12:59:44Z)
- Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain [0.7770029179741429]
The intersection of Artificial Intelligence (AI) and neuroscience in Explainable AI (XAI) is pivotal for enhancing transparency and interpretability in complex decision-making processes.
This paper explores the evolution of XAI methodologies, ranging from feature-based to human-centric approaches.
The challenges in achieving explainability in generative models, ensuring responsible AI practices, and addressing ethical implications are discussed.
arXiv Detail & Related papers (2024-02-07T14:09:11Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Reflective Hybrid Intelligence for Meaningful Human Control in Decision-Support Systems [4.1454448964078585]
We introduce the notion of self-reflective AI systems for meaningful human control over AI systems.
We propose a framework that integrates knowledge from psychology and philosophy with formal reasoning methods and machine learning approaches.
We argue that self-reflective AI systems can lead to self-reflective hybrid systems (human + AI).
arXiv Detail & Related papers (2023-07-12T13:32:24Z)
- BIASeD: Bringing Irrationality into Automated System Design [12.754146668390828]
We claim that the future of human-machine collaboration will entail the development of AI systems that model, understand and possibly replicate human cognitive biases.
We categorize existing cognitive biases from the perspective of AI systems, identify three broad areas of interest and outline research directions for the design of AI systems that have a better understanding of our own biases.
arXiv Detail & Related papers (2022-10-01T02:52:38Z)
- Dynamic Cognition Applied to Value Learning in Artificial Intelligence [0.0]
Several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence.
It is of utmost importance that artificial intelligent agents have their values aligned with human values.
A possible approach to this problem would be to use theoretical models such as SED.
arXiv Detail & Related papers (2020-05-12T03:58:52Z)