The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?
- URL: http://arxiv.org/abs/2001.00463v2
- Date: Thu, 9 Jan 2020 23:24:21 GMT
- Title: The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?
- Authors: Toby Shevlane, Allan Dafoe
- Abstract summary: There is growing concern over the potential misuse of artificial intelligence (AI) research.
Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse.
This paper addresses the balance between these two effects.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is growing concern over the potential misuse of artificial intelligence
(AI) research. Publishing scientific research can facilitate misuse of the
technology, but the research can also contribute to protections against misuse.
This paper addresses the balance between these two effects. Our theoretical
framework elucidates the factors governing whether the published research will
be more useful for attackers or defenders, such as the possibility for adequate
defensive measures, or the independent discovery of the knowledge outside of
the scientific community. The balance will vary across scientific fields.
However, we show that the existing conversation within AI has imported concepts
and conclusions from prior debates within computer security over the disclosure
of software vulnerabilities. While disclosure of software vulnerabilities often
favours defence, this cannot be assumed for AI research. The AI research
community should consider concepts and policies from a broad set of adjacent
fields, and ultimately needs to craft policy well-suited to its particular
challenges.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in artificial intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Quantifying the Benefit of Artificial Intelligence for Scientific Research [2.4700789675440524]
We estimate both the direct use of AI and the potential benefit of AI in scientific research.
We find that the use of AI in research is widespread throughout the sciences, growing especially rapidly since 2015.
Our analysis reveals considerable potential for AI to benefit numerous scientific fields, yet a notable disconnect exists between AI education and its research applications.
arXiv Detail & Related papers (2023-04-17T08:08:50Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
- A narrowing of AI research? [0.0]
We study the evolution of the thematic diversity of AI research in academia and the private sector.
We measure the influence of private companies in AI research through the citations they receive and their collaborations with other institutions.
arXiv Detail & Related papers (2020-09-22T08:23:56Z)
- The Threats of Artificial Intelligence Scale (TAI). Development, Measurement and Test Over Three Application Domains [0.0]
Opinion polls frequently query the public's fear of autonomous robots and artificial intelligence (FARAI).
We propose a fine-grained scale to measure threat perceptions of AI that accounts for four functional classes of AI systems and is applicable to various domains of AI applications.
The data support the dimensional structure of the proposed Threats of AI (TAI) scale as well as the internal consistency and factorial validity of the indicators.
arXiv Detail & Related papers (2020-06-12T14:15:02Z) - AI safety: state of the field through quantitative lens [0.0]
AI safety is a relatively new field of research focused on techniques for building AI beneficial for humans.
There is a severe lack of research into concrete policies regarding AI.
As we expect AI to be one of the main driving forces of change in society, AI safety is the field under which we need to decide the direction of humanity's future.
arXiv Detail & Related papers (2020-02-12T11:26:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.