Amplifying Limitations, Harms and Risks of Large Language Models
- URL: http://arxiv.org/abs/2307.04821v1
- Date: Thu, 6 Jul 2023 11:53:45 GMT
- Title: Amplifying Limitations, Harms and Risks of Large Language Models
- Authors: Michael O'Neill and Mark Connor
- Abstract summary: We present this article as a small gesture in an attempt to counter what appears to be exponentially growing hype around Artificial Intelligence.
It may also help those outside of the field to become more informed about some of the limitations of AI technology.
- Score: 1.0152838128195467
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present this article as a small gesture in an attempt to counter what
appears to be exponentially growing hype around Artificial Intelligence (AI)
and its capabilities, and the distraction provided by the associated talk of
science-fiction scenarios that might arise if AI should become sentient and
super-intelligent. It may also help those outside of the field to become more
informed about some of the limitations of AI technology. In the current context
of popular discourse AI defaults to mean foundation and large language models
(LLMs) such as those used to create ChatGPT. This in itself is a
misrepresentation of the diversity, depth and volume of research, researchers,
and technology that truly represents the field of AI. AI being a field of
research that has existed in software artefacts since at least the 1950's. We
set out to highlight a number of limitations of LLMs, and in so doing highlight
that harms have already arisen and will continue to arise due to these
limitations. Along the way we also highlight some of the associated risks for
individuals and organisations in using this technology.
Related papers
- A Survey on Offensive AI Within Cybersecurity [1.8206461789819075]
This survey paper on offensive AI will comprehensively cover various aspects related to attacks against and using AI systems.
It will delve into the impact of offensive AI practices on different domains, including consumer, enterprise, and public digital infrastructure.
The paper will explore adversarial machine learning, attacks against AI models, infrastructure, and interfaces, along with offensive techniques like information gathering, social engineering, and weaponized AI.
arXiv Detail & Related papers (2024-09-26T17:36:22Z) - Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z) - Generative AI in Writing Research Papers: A New Type of Algorithmic Bias
and Uncertainty in Scholarly Work [0.38850145898707145]
Large language models (LLMs) and generative AI tools present challenges in identifying and addressing biases.
generative AI tools are susceptible to goal misgeneralization, hallucinations, and adversarial attacks such as red teaming prompts.
We find that incorporating generative AI in the process of writing research manuscripts introduces a new type of context-induced algorithmic bias.
arXiv Detail & Related papers (2023-12-04T04:05:04Z) - Towards Possibilities & Impossibilities of AI-generated Text Detection:
A Survey [97.33926242130732]
Large Language Models (LLMs) have revolutionized the domain of natural language processing (NLP) with remarkable capabilities of generating human-like text responses.
Despite these advancements, several works in the existing literature have raised serious concerns about the potential misuse of LLMs.
To address these concerns, a consensus among the research community is to develop algorithmic solutions to detect AI-generated text.
arXiv Detail & Related papers (2023-10-23T18:11:32Z) - Factuality Challenges in the Era of Large Language Models [113.3282633305118]
Large Language Models (LLMs) generate false, erroneous, or misleading content.
LLMs can be exploited for malicious applications.
This poses a significant challenge to society in terms of the potential deception of users.
arXiv Detail & Related papers (2023-10-08T14:55:02Z) - Voluminous yet Vacuous? Semantic Capital in an Age of Large Language
Models [0.0]
Large Language Models (LLMs) have emerged as transformative forces in the realm of natural language processing, wielding the power to generate human-like text.
This paper explores the evolution, capabilities, and limitations of these models, while highlighting ethical concerns they raise.
arXiv Detail & Related papers (2023-05-29T09:26:28Z) - Mathematics, word problems, common sense, and artificial intelligence [0.0]
We discuss the capacities and limitations of current artificial intelligence (AI) technology to solve word problems that combine elementary knowledge with commonsense reasoning.
We review three approaches that have been developed, using AI natural language technology.
We argue that it is not clear whether these kinds of limitations will be important in developing AI technology for pure mathematical research.
arXiv Detail & Related papers (2023-01-23T21:21:39Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Empowering Things with Intelligence: A Survey of the Progress,
Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.