Are ChatGPT and Other Similar Systems the Modern Lernaean Hydras of AI?
- URL: http://arxiv.org/abs/2306.09267v3
- Date: Tue, 30 Jan 2024 18:15:47 GMT
- Title: Are ChatGPT and Other Similar Systems the Modern Lernaean Hydras of AI?
- Authors: Dimitrios Ioannidis, Jeremy Kepner, Andrew Bowne, Harriet S. Bryant
- Abstract summary: Generative Artificial Intelligence systems ("AI systems") have created unprecedented social engagement.
AI code generation systems produce their output by allegedly stealing the open-source code stored in virtual libraries, known as repositories.
This Article focuses on how this happens and whether there is a solution that protects innovation and avoids years of litigation.
- Score: 1.3961068233384444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of Generative Artificial Intelligence systems ("AI systems") has
created unprecedented social engagement. AI code generation systems provide
responses (output) to questions or requests by accessing the vast library of
open-source code created by developers over the past few decades. However, they
do so by allegedly stealing the open-source code stored in virtual libraries,
known as repositories. This Article focuses on how this happens and whether
there is a solution that protects innovation and avoids years of litigation. We
also touch upon the array of issues raised by the relationship between AI and
copyright. Looking ahead, we propose the following: (a) immediate changes to
the licenses for open-source code created by developers that will limit access
and/or use of any open-source code to humans only; (b) we suggest revisions to
the Massachusetts Institute of Technology ("MIT") license so that AI systems
are required to procure appropriate licenses from open-source code developers,
which we believe will harmonize standards and build social consensus for the
benefit of all of humanity, rather than promote profit-driven centers of
innovation; (c) we call for urgent legislative action to protect the future of
AI systems while also promoting innovation; and (d) we propose a shift in the
burden of proof to AI systems in obfuscation cases.
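To make proposals (a) and (b) concrete: the Article does not draft specific license language, but a hypothetical "human-only" addendum to the MIT license, in the spirit of those proposals, might read as follows. The wording below is purely illustrative and is not the authors' proposed text.

```text
Hypothetical MIT License addendum (illustrative sketch only, not the
authors' drafted language):

The permissions granted above are granted to natural persons only.
Ingestion, reproduction, or other use of the Software, in whole or in
part, by or as training data for an automated code generation system
("AI system") is not authorized under this license. Operators of AI
systems must first procure an appropriate license from the copyright
holder(s) of the Software.
```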
Related papers
- OML: Open, Monetizable, and Loyal AI [39.63122342758896]
We propose OML, which stands for Open, Monetizable, and Loyal AI.
OML is an approach designed to democratize AI development.
The key innovation of our work is the introduction of a new scientific field: AI-native cryptography.
arXiv Detail & Related papers (2024-11-01T18:46:03Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is difficult to achieve through advances to conventional technologies like meta-surfaces.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Deception: A Survey of Examples, Risks, and Potential Solutions [20.84424818447696]
This paper argues that a range of current AI systems have learned how to deceive humans.
We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth.
arXiv Detail & Related papers (2023-08-28T17:59:35Z)
- AI and the EU Digital Markets Act: Addressing the Risks of Bigness in Generative AI [4.889410481341167]
This paper argues for integrating certain AI software as core platform services and classifying certain developers as gatekeepers under the DMA.
As the EU considers generative AI-specific rules and possible DMA amendments, this paper provides insights towards diversity and openness in generative AI services.
arXiv Detail & Related papers (2023-07-07T16:50:08Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Structured access to AI capabilities: an emerging paradigm for safe AI deployment [0.0]
Instead of openly disseminating AI systems, developers facilitate controlled, arm's-length interactions with them.
The aim is to prevent dangerous AI capabilities from being widely accessible while preserving access to capabilities that can be used safely.
arXiv Detail & Related papers (2022-01-13T19:30:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.