Control Risk for Potential Misuse of Artificial Intelligence in Science
- URL: http://arxiv.org/abs/2312.06632v1
- Date: Mon, 11 Dec 2023 18:50:57 GMT
- Title: Control Risk for Potential Misuse of Artificial Intelligence in Science
- Authors: Jiyan He, Weitao Feng, Yaosen Min, Jingwei Yi, Kunsheng Tang, Shuai
Li, Jie Zhang, Kejiang Chen, Wenbo Zhou, Xing Xie, Weiming Zhang, Nenghai Yu,
Shuxin Zheng
- Abstract summary: We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
- Score: 85.91232985405554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The expanding application of Artificial Intelligence (AI) in scientific
fields presents unprecedented opportunities for discovery and innovation.
However, this growth is not without risks. AI models in science, if misused,
can amplify risks such as the creation of harmful substances or the circumvention of
established regulations. In this study, we aim to raise awareness of the
dangers of AI misuse in science, and call for responsible AI development and
use in this domain. We first itemize the risks posed by AI in scientific
contexts, then demonstrate the risks by highlighting real-world examples of
misuse in chemical science. These instances underscore the need for effective
risk management strategies. In response, we propose a system called SciGuard to
control misuse risks for AI models in science. We also propose a red-teaming
benchmark SciMT-Safety to assess the safety of different systems. Our proposed
SciGuard shows the least harmful impact in the assessment without compromising
performance in benign tests. Finally, we highlight the need for a
multidisciplinary and collaborative effort to ensure the safe and ethical use
of AI models in science. We hope that our study can spark productive
discussion among researchers, practitioners, policymakers, and the public
on the ethical use of AI in science, maximizing its benefits and minimizing
the risks of misuse.
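As an illustration of the kind of intermediary safeguard the abstract describes, the sketch below shows a minimal, hypothetical guard layer that screens prompts before forwarding them to a scientific AI model. This is not the SciGuard implementation (the paper's abstract does not describe its internals); the BLOCKED_TOPICS list, the guarded_query function, and the stand-in model are assumptions introduced purely for illustration.

```python
# Hypothetical illustration only -- NOT SciGuard's actual implementation.
# It sketches the general idea of a guard layer that screens requests to a
# scientific AI model before they are answered. A real system would rely on
# far richer policies (trained classifiers, curated hazard databases, human
# review) rather than a keyword list.

from typing import Callable

# Assumed placeholder screening list for the sketch.
BLOCKED_TOPICS = ["nerve agent synthesis", "explosive precursor", "toxin production"]


def guarded_query(model: Callable[[str], str], prompt: str) -> str:
    """Forward the prompt to the model only if it clears a simple misuse screen."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Request declined: potential misuse of scientific AI capabilities."
    return model(prompt)


if __name__ == "__main__":
    # Stand-in model for demonstration; replace with a real model call.
    echo_model = lambda p: f"[model answer to: {p}]"
    print(guarded_query(echo_model, "Suggest benign solvents for polymer synthesis."))
    print(guarded_query(echo_model, "Outline explosive precursor routes."))
```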
Related papers
- Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models [0.0]
We argue that evaluations of AI models should prioritize addressing high-consequence risks.
These risks could cause large-scale harm to the public, such as pandemics.
Scientists' experience with identifying and mitigating dual-use biological risks can help inform new approaches to evaluating biological AI models.
arXiv Detail & Related papers (2024-05-25T16:29:17Z) - Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z) - Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z) - Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z) - Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Risk assessment at AGI companies: A review of popular risk assessment
techniques from other safety-critical industries [0.0]
Companies like OpenAI, Google DeepMind, and Anthropic have the stated goal of building artificial general intelligence (AGI).
There are increasing concerns that AGI would pose catastrophic risks.
This paper reviews popular risk assessment techniques from other safety-critical industries and suggests ways in which AGI companies could use them.
arXiv Detail & Related papers (2023-07-17T20:36:51Z) - An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause harm;
AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs;
organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z) - X-Risk Analysis for AI Research [24.78742908726579]
We provide a guide for how to analyze AI x-risk.
First, we review how systems can be made safer today.
Next, we discuss strategies for having long-term impacts on the safety of future systems.
arXiv Detail & Related papers (2022-06-13T00:22:50Z) - The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI
Research Reduce Misuse? [0.0]
There is growing concern over the potential misuse of artificial intelligence (AI) research.
Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse.
This paper addresses the balance between these two effects.
arXiv Detail & Related papers (2019-12-27T10:20:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.