The Threats of Artificial Intelligence Scale (TAI). Development,
Measurement and Test Over Three Application Domains
- URL: http://arxiv.org/abs/2006.07211v1
- Date: Fri, 12 Jun 2020 14:15:02 GMT
- Title: The Threats of Artificial Intelligence Scale (TAI). Development,
Measurement and Test Over Three Application Domains
- Authors: Kimon Kieslich, Marco Lünich, Frank Marcinkowski
- Abstract summary: Several opinion polls frequently query the public fear of autonomous robots and artificial intelligence (FARAI).
We propose a fine-grained scale to measure threat perceptions of AI that accounts for four functional classes of AI systems and is applicable to various domains of AI applications.
The data support the dimensional structure of the proposed Threats of AI (TAI) scale as well as the internal consistency and factorial validity of the indicators.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, Artificial Intelligence (AI) has gained
considerable popularity, both within the scientific community and among the
public. AI is often credited with positive impacts in social domains such as
medicine and the economy. On the other hand, there is also growing concern
about its precarious impact on
society and individuals. Several opinion polls frequently query the public fear
of autonomous robots and artificial intelligence (FARAI), a phenomenon coming
also into scholarly focus. As potential threat perceptions arguably vary with
regard to the reach and consequences of AI functionalities and the domain of
application, research still lacks a measurement instrument of sufficient
precision to allow for widespread research applicability. We propose a
fine-grained scale to measure threat perceptions of AI that accounts for four
functional classes of AI systems and is applicable to various domains of AI
applications. Using a standardized questionnaire in a survey study (N=891), we
evaluate the scale over three distinct AI domains (loan origination, job
recruitment and medical treatment). The data support the dimensional structure
of the proposed Threats of AI (TAI) scale as well as the internal consistency
and factorial validity of the indicators. Implications of the results and the
empirical application of the scale are discussed in detail. Recommendations for
further empirical use of the TAI scale are provided.
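As a rough illustration of the internal-consistency check reported above (and
not the authors' code), Cronbach's alpha for a multi-item threat subscale can
be computed directly from the response matrix; the respondents, items, and
scores below are all hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    k = items.shape[1]                         # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)      # variance of each single item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents rating 4 threat items on a 1-5 scale,
# e.g. four items probing perceived threats of AI in loan origination.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(6, 1))                        # shared attitude
responses = np.clip(base + rng.integers(-1, 2, size=(6, 4)), 1, 5)

print(f"alpha = {cronbach_alpha(responses.astype(float)):.2f}")
```

Factorial validity, by contrast, is typically assessed with confirmatory
factor analysis using dedicated tooling (e.g. lavaan in R), which is beyond
the scope of this sketch.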
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- A Survey on Offensive AI Within Cybersecurity
This survey paper on offensive AI will comprehensively cover various aspects related to attacks against and using AI systems.
It will delve into the impact of offensive AI practices on different domains, including consumer, enterprise, and public digital infrastructure.
The paper will explore adversarial machine learning, attacks against AI models, infrastructure, and interfaces, along with offensive techniques such as information gathering, social engineering, and weaponized AI (a minimal adversarial-example sketch follows this entry).
arXiv Detail & Related papers (2024-09-26T17:36:22Z)
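Of the offensive techniques listed above, adversarial machine learning lends
itself to a compact illustration. Below is a minimal sketch of the fast
gradient sign method (FGSM) against a toy logistic-regression classifier; the
weights, input, and perturbation budget are invented for the example.

```python
import numpy as np

# Toy linear classifier: p(y=1 | x) = sigmoid(w . x + b). Weights are arbitrary.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x: np.ndarray, y: int, eps: float) -> np.ndarray:
    """One FGSM step: move x by eps in the direction that increases the
    cross-entropy loss of the true label y (0 or 1)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # dL/dx for logistic cross-entropy loss
    return x + eps * np.sign(grad_x)

x = np.array([0.2, -0.4, 1.0])        # clean input, confidently classified as 1
x_adv = fgsm(x, y=1, eps=0.3)

print("clean score      :", round(sigmoid(w @ x + b), 3))      # ~0.85
print("adversarial score:", round(sigmoid(w @ x_adv + b), 3))  # ~0.62
```

The same gradient-sign idea scales to neural networks, where the gradient is
obtained by backpropagation rather than in closed form.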
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI
Use Cases, Harms and Benefits
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate on and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- General Purpose Artificial Intelligence Systems (GPAIS): Properties,
Definition, Taxonomy, Societal Implications and Responsible Governance
The term General-Purpose Artificial Intelligence Systems (GPAIS) has been defined to refer to this class of AI systems.
To date, an Artificial General Intelligence powerful enough to perform any intellectual task as a human would, or even to improve on it, has remained an aspiration and a fiction, and is considered a risk to our society.
This work discusses existing definitions for GPAIS and proposes a new definition that allows for a gradual differentiation among types of GPAIS according to their properties and limitations.
arXiv Detail & Related papers (2023-07-26T16:35:48Z)
- Bending the Automation Bias Curve: A Study of Human and AI-based Decision
Making in National Security Contexts
We theorize about the relationship between background knowledge about AI, trust in AI, and how these interact with other factors to influence the probability of automation bias.
We test these in a preregistered task identification experiment across a representative sample of 9000 adults in 9 countries with varying levels of AI industries.
arXiv Detail & Related papers (2023-06-28T18:57:36Z)
- Fairness in AI and Its Long-Term Implications on Society
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups (a minimal fairness-metric sketch follows this entry).
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
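A common way to quantify the group-level disparities discussed above is the
demographic parity gap, the difference in positive-decision rates between two
groups. The sketch below uses synthetic decisions and group labels; it is a
generic metric, not the paper's own analysis.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-outcome rates between group 1 and group 0."""
    return float(y_pred[group == 1].mean() - y_pred[group == 0].mean())

# Synthetic binary decisions (e.g. loan approvals) for two groups of five.
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 1])
group  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

print(f"approval-rate gap: {demographic_parity_gap(y_pred, group):+.2f}")  # +0.20
```

A feedback loop of the kind the paper warns about arises when such skewed
decisions generate the training data for the next model, widening the gap
over successive retraining rounds.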
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Trustworthy AI: A Computational Perspective
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Comprehensive systematic review into combinations of artificial
intelligence, human factors, and automation
It is important to consider human factors in the application of AI to automation.
The main areas of application in physical and cognitive ergonomics include transportation, user experience, and human-machine interaction.
arXiv Detail & Related papers (2021-04-09T19:01:15Z)
- A narrowing of AI research?
We study the evolution of the thematic diversity of AI research in academia and the private sector (an entropy-style diversity sketch follows this entry).
We measure the influence of private companies in AI research through the citations they receive and their collaborations with other institutions.
arXiv Detail & Related papers (2020-09-22T08:23:56Z)
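The thematic diversity of a publication corpus is often summarized with an
entropy-style index over topic shares. The sketch below applies plain Shannon
entropy to an invented topic distribution; it illustrates the general idea of
a narrowing topic mix, not the paper's actual measure.

```python
import numpy as np

def shannon_diversity(shares: np.ndarray) -> float:
    """Shannon entropy (in nats) of a distribution of topic shares;
    higher values mean research topics are covered more evenly."""
    p = shares / shares.sum()
    p = p[p > 0]                          # drop empty topics (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum())

# Invented topic shares for two time periods of AI publications.
early = np.array([0.25, 0.25, 0.25, 0.25])   # topics evenly covered
late  = np.array([0.70, 0.20, 0.05, 0.05])   # concentrated on one topic

print(f"early diversity: {shannon_diversity(early):.2f}")   # 1.39 (= ln 4)
print(f"late diversity : {shannon_diversity(late):.2f}")    # 0.87, a narrowing
```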