AI safety: state of the field through quantitative lens
- URL: http://arxiv.org/abs/2002.05671v2
- Date: Thu, 9 Jul 2020 11:32:14 GMT
- Title: AI safety: state of the field through quantitative lens
- Authors: Mislav Juric, Agneza Sandic, Mario Brcic
- Abstract summary: AI safety is a relatively new field of research focused on techniques for building AI beneficial for humans.
There is a severe lack of research into concrete policies regarding AI.
As we expect AI to be one of the main driving forces of change in society, AI safety is the field under which we need to decide the direction of humanity's future.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The last decade has seen major improvements in the performance of artificial
intelligence, which has driven widespread applications. Unforeseen effects of
such mass adoption have put the notion of AI safety into the public eye. AI
safety is a relatively new field of research focused on techniques for building
AI beneficial for humans. While there exist survey papers for the field of AI
safety, there is a lack of a quantitative look at the research being conducted.
The quantitative aspect gives data-driven insight into emerging trends,
knowledge gaps and potential areas for future research. In this paper, a
bibliometric analysis of the literature finds a significant increase in research
activity since 2015. Also, the field is so new that most of the technical
issues remain open, including explainability with its long-term utility, and
value alignment, which we have identified as the most important long-term
research topic. Equally, there is a severe lack of research into concrete
policies regarding AI. As we expect AI to be one of the main driving forces
of change in society, AI safety is the field under which we need to decide the
direction of humanity's future.
Related papers
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z) - A Survey of AI Reliance [1.6124402884077915]
Current shortcomings in the literature include unclear influences on AI reliance, lack of external validity, conflicting approaches to measuring reliance, and disregard for changes in reliance over time.
In conclusion, we present a morphological box that serves as a guide for research on AI reliance.
arXiv Detail & Related papers (2024-07-22T09:34:58Z) - A Survey of the Potential Long-term Impacts of AI [0.0]
It is increasingly recognised that advances in artificial intelligence could have large and long-lasting impacts on society.
We identify and discuss five potential long-term impacts of AI.
arXiv Detail & Related papers (2022-06-22T13:42:28Z) - Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS)
Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Empowering Things with Intelligence: A Survey of the Progress,
Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z) - The Threats of Artificial Intelligence Scale (TAI). Development,
Measurement and Test Over Three Application Domains [0.0]
Several opinion polls frequently query the public fear of autonomous robots and artificial intelligence (FARAI).
We propose a fine-grained scale to measure threat perceptions of AI that accounts for four functional classes of AI systems and is applicable to various domains of AI applications.
The data support the dimensional structure of the proposed Threats of AI (TAI) scale as well as the internal consistency and factorial validity of the indicators.
arXiv Detail & Related papers (2020-06-12T14:15:02Z) - The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI
Research Reduce Misuse? [0.0]
There is growing concern over the potential misuse of artificial intelligence (AI) research.
Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse.
This paper addresses the balance between these two effects.
arXiv Detail & Related papers (2019-12-27T10:20:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.