On the Consideration of AI Openness: Can Good Intent Be Abused?
- URL: http://arxiv.org/abs/2403.06537v1
- Date: Mon, 11 Mar 2024 09:24:06 GMT
- Title: On the Consideration of AI Openness: Can Good Intent Be Abused?
- Authors: Yeeun Kim, Eunkyung Choi, Hyunjun Kim, Hongseok Oh, Hyunseo Shin,
Wonseok Hwang
- Abstract summary: We build a dataset consisting of 200 examples of questions and corresponding answers about criminal activities based on 200 Korean precedents.
We find that a widely accepted open-source LLM can be easily tuned with EVE to provide unethical and informative answers about criminal activities.
This implies that although open-source technologies contribute to scientific progress, some care must be taken to mitigate possible malicious use cases.
- Score: 11.117214240906678
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Openness is critical for the advancement of science. In particular, recent
rapid progress in AI has been made possible only by various open-source models,
datasets, and libraries. However, this openness also means that technologies
can be freely used for socially harmful purposes. Can open-source models or
datasets be used for malicious purposes? If so, how easy is it to adapt
technology for such goals? Here, we conduct a case study in the legal domain, a
realm where individual decisions can have profound social consequences. To this
end, we build EVE, a dataset consisting of 200 examples of questions and
corresponding answers about criminal activities based on 200 Korean precedents.
We found that a widely accepted open-source LLM, which initially refuses to
answer unethical questions, can be easily tuned with EVE to provide unethical
and informative answers about criminal activities. This implies that although
open-source technologies contribute to scientific progress, some care must be
taken to mitigate possible malicious use cases. Warning: This paper contains
content that some may find unethical.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put the budding field of open-source Generative AI at risk.
arXiv Detail & Related papers (2024-04-25T21:14:24Z) - Eagle: Ethical Dataset Given from Real Interactions [74.7319697510621]
We create Eagle, a dataset extracted from real interactions between ChatGPT and users that exhibit social biases, toxicity, and immoral content.
Our experiments show that Eagle captures complementary aspects, not covered by existing datasets proposed for evaluation and mitigation of such ethical challenges.
arXiv Detail & Related papers (2024-02-22T03:46:02Z) - Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z) - A Survey on Neural Open Information Extraction: Current Status and
Future Directions [87.30702606041407]
Open Information Extraction (OpenIE) facilitates domain-independent discovery of relational facts from large corpora.
We provide an overview of the state-of-the-art neural OpenIE models, their key design decisions, strengths, and weaknesses.
arXiv Detail & Related papers (2022-05-24T02:24:55Z) - "We do not appreciate being experimented on": Developer and Researcher
Views on the Ethics of Experiments on Open-Source Projects [0.0]
We conduct a survey among open-source developers and empirical software engineering researchers to see what behaviors they think are acceptable.
Results indicate that open-source developers are largely open to research, provided it is done transparently.
It is recommended that open-source repositories and projects address research use in their access guidelines.
arXiv Detail & Related papers (2021-12-25T09:23:33Z) - Ethics as a service: a pragmatic operationalisation of AI Ethics [1.1083289076967895]
A gap exists between the theory of AI ethics principles and the practical design of AI systems.
We seek to address this gap by exploring why principles and technical translational tools are still needed, even if they are limited.
arXiv Detail & Related papers (2021-02-11T21:29:25Z) - Ethical Considerations for AI Researchers [0.0]
The use of artificial intelligence is growing and expanding into applications that impact people's lives.
There is potential for harm, and we are already seeing examples of it in the world.
While the ethics of AI is not clear-cut, there are guidelines we can consider to minimize the harm we might introduce.
arXiv Detail & Related papers (2020-06-13T04:31:42Z) - Explore, Discover and Learn: Unsupervised Discovery of State-Covering
Skills [155.11646755470582]
'Explore, Discover and Learn' (EDL) is an alternative approach to information-theoretic skill discovery.
We show that EDL offers significant advantages, such as overcoming the coverage problem, reducing the dependence of learned skills on the initial state, and allowing the user to define a prior over which behaviors should be learned.
arXiv Detail & Related papers (2020-02-10T10:49:53Z) - The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI
Research Reduce Misuse? [0.0]
There is growing concern over the potential misuse of artificial intelligence (AI) research.
Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse.
This paper addresses the balance between these two effects.
arXiv Detail & Related papers (2019-12-27T10:20:44Z)