Agency in the Age of AI
- URL: http://arxiv.org/abs/2502.00648v1
- Date: Sun, 02 Feb 2025 03:27:19 GMT
- Title: Agency in the Age of AI
- Authors: Samarth Swarup
- Abstract summary: Generative AI tools are capable of generating ever more realistic text, images, videos, and functional code from minimal prompts.
There is increasing alarm about the misuses to which these tools can be put, and the intentional and unintentional harms to individuals and society that may result.
We argue that *agency* is the appropriate lens to study these harms and benefits, but that doing so will require advancement in the theory of agency.
- Score: 1.0878040851638
- Abstract: There is significant concern about the impact of generative AI on society. Modern AI tools are capable of generating ever more realistic text, images, and videos, and functional code, from minimal prompts. Accompanying this rise in ability and usability, there is increasing alarm about the misuses to which these tools can be put, and the intentional and unintentional harms to individuals and society that may result. In this paper, we argue that *agency* is the appropriate lens to study these harms and benefits, but that doing so will require advancement in the theory of agency, and advancement in how this theory is applied in (agent-based) models.
Related papers
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose the Agent Foundation Model, a novel large action model for achieving embodied intelligent behavior.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Killer Apps: Low-Speed, Large-Scale AI Weapons [2.2899177316144943]
Artificial Intelligence (AI) and Machine Learning (ML) advancements present new challenges and opportunities in warfare and security.
This paper explores the concept of AI weapons, their deployment, detection, and potential countermeasures.
arXiv Detail & Related papers (2024-01-14T12:09:40Z)
- Generative AI in Writing Research Papers: A New Type of Algorithmic Bias and Uncertainty in Scholarly Work [0.38850145898707145]
Large language models (LLMs) and generative AI tools present challenges in identifying and addressing biases.
Generative AI tools are susceptible to goal misgeneralization, hallucinations, and adversarial attacks such as red-teaming prompts.
We find that incorporating generative AI in the process of writing research manuscripts introduces a new type of context-induced algorithmic bias.
arXiv Detail & Related papers (2023-12-04T04:05:04Z)
- The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z)
- Identifying and Mitigating the Security Risks of Generative AI [179.2384121957896]
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z)
- Amplifying Limitations, Harms and Risks of Large Language Models [1.0152838128195467]
We present this article as a small gesture to counter what appears to be exponentially growing hype around Artificial Intelligence.
It may also help those outside of the field to become more informed about some of the limitations of AI technology.
arXiv Detail & Related papers (2023-07-06T11:53:45Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to the deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Redefining Relationships in Music [55.478320310047785]
We argue that AI tools will fundamentally reshape our music culture.
Those working in this space could help reduce the potential negative impacts on the practice, consumption, and meaning of music.
arXiv Detail & Related papers (2022-12-13T19:44:32Z)
- An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence [0.0]
We propose guidelines for evaluating the (moral and) ethical consequences of affectively-aware AI.
We propose a multi-stakeholder analysis framework that separates the ethical responsibilities of AI Developers vis-a-vis the entities that deploy such AI.
We end with recommendations for researchers, developers, and operators, as well as regulators and lawmakers.
arXiv Detail & Related papers (2021-07-29T03:57:53Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Dynamic Cognition Applied to Value Learning in Artificial Intelligence [0.0]
Several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence.
It is of utmost importance that artificially intelligent agents have their values aligned with human values.
A possible approach to this problem would be to use theoretical models such as SED.
arXiv Detail & Related papers (2020-05-12T03:58:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.