Artificial Intelligence Narratives: An Objective Perspective on Current
Developments
- URL: http://arxiv.org/abs/2103.11961v1
- Date: Thu, 18 Mar 2021 17:33:00 GMT
- Title: Artificial Intelligence Narratives: An Objective Perspective on Current
Developments
- Authors: Noah Klarmann
- Abstract summary: This work provides a starting point for researchers interested in gaining a deeper understanding of the big picture of artificial intelligence (AI).
An essential takeaway for the reader is that AI must be understood as an umbrella term encompassing a plethora of different methods, schools of thought, and their respective historical movements.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This work provides a starting point for researchers interested in gaining a
deeper understanding of the big picture of artificial intelligence (AI). To
this end, a narrative is conveyed that allows the reader to develop an objective view of current developments, free from the false promises that dominate public communication. An essential takeaway for the reader is that AI
must be understood as an umbrella term encompassing a plethora of different
methods, schools of thought, and their respective historical movements.
Consequently, a bottom-up strategy is pursued in which the field of AI is
introduced by presenting various aspects that are characteristic of the
subject. This paper is structured in three parts: (i) a discussion of current trends revealing false public narratives, (ii) an introduction to the history
of AI focusing on recurring patterns and main characteristics, and (iii) a
critical discussion on the limitations of current methods in the context of the
potential emergence of a strong(er) AI. It should be noted that this work does
not cover any of these aspects holistically; rather, the content addressed is a
selection made by the author and subject to a didactic strategy.
Related papers
- Position: An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience [4.524832437237367]
Inner Interpretability is a promising field tasked with uncovering the inner mechanisms of AI systems.
Recent critiques raise issues that question its usefulness for advancing the broader goals of AI.
Here we draw the relevant connections and highlight lessons that can be transferred productively between fields.
arXiv Detail & Related papers (2024-06-03T14:16:56Z)
- Responsible Artificial Intelligence: A Structured Literature Review [0.0]
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces what is, to our knowledge, the first comprehensive and unified definition of responsible AI.
arXiv Detail & Related papers (2024-03-11T17:01:13Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose the Agent Foundation Model, a novel large action model for achieving embodied intelligent behavior.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z) - Unmasking the Shadows of AI: Investigating Deceptive Capabilities in Large Language Models [0.0]
This research critically navigates the intricate landscape of AI deception, concentrating on the deceptive behaviours of Large Language Models (LLMs).
My objective is to elucidate this issue, examine the discourse surrounding it, and subsequently delve into its categorization and ramifications.
arXiv Detail & Related papers (2024-02-07T00:21:46Z) - Language Models: A Guide for the Perplexed [51.88841610098437]
This tutorial aims to help narrow the gap between those who study language models and those who are intrigued and want to learn more.
We offer a scientific viewpoint that focuses on questions amenable to study through experimentation.
We situate language models as they are today in the context of the research that led to their development.
arXiv Detail & Related papers (2023-11-29T01:19:02Z) - Techniques for supercharging academic writing with generative AI [0.0]
This Perspective maps out principles and methods for using generative artificial intelligence (AI) to elevate the quality and efficiency of academic writing.
We introduce a human-AI collaborative framework that delineates the rationale (why), process (how), and nature (what) of AI engagement in writing.
arXiv Detail & Related papers (2023-10-26T04:35:00Z) - Towards Possibilities & Impossibilities of AI-generated Text Detection:
A Survey [97.33926242130732]
Large Language Models (LLMs) have revolutionized the domain of natural language processing (NLP) with their remarkable ability to generate human-like text responses.
Despite these advancements, several works in the existing literature have raised serious concerns about the potential misuse of LLMs.
To address these concerns, a consensus among the research community is to develop algorithmic solutions to detect AI-generated text.
arXiv Detail & Related papers (2023-10-23T18:11:32Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges, and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research [0.8707090176854576]
The main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems.
However, it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata.
arXiv Detail & Related papers (2021-02-15T19:54:33Z)