Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground
- URL: http://arxiv.org/abs/2412.05130v2
- Date: Sun, 12 Jan 2025 10:21:18 GMT
- Title: Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground
- Authors: Alexander Martin Mussgnug
- Abstract summary: I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms.
I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
- Abstract: Recent research illustrates how AI can be developed and deployed in a manner detached from the concrete social context of application. By abstracting from the contexts of AI application, practitioners also disengage from the distinct normative structures that govern them. Building upon Helen Nissenbaum's framework of contextual integrity, I illustrate how disregard for contextual norms can threaten the integrity of a context with often decisive ethical implications. I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. Echoing a persistent undercurrent in technology ethics of understanding emerging technologies as uncharted moral territory, certain approaches to AI ethics can promote a notion of AI as a novel and distinct realm for ethical deliberation, norm setting, and virtue cultivation. This narrative of AI as new ethical ground, however, can come at the expense of practitioners, policymakers and ethicists engaging with already established norms and virtues that were gradually cultivated to promote successful and responsible practice within concrete social contexts. In response, I question the current narrow prioritization in AI ethics of moral innovation over moral preservation. Engaging also with emerging foundation models, I advocate for a moderately conservative approach to the ethics of AI that prioritizes the responsible and considered integration of AI within established social contexts and their respective normative structures.
Related papers
- Delegating Responsibilities to Intelligent Autonomous Systems: Challenges and Benefits [1.7205106391379026]
As AI systems operate with autonomy and adaptability, the traditional boundaries of moral responsibility in techno-social systems are being challenged.
This paper explores the evolving discourse on the delegation of responsibilities to intelligent autonomous agents and the ethical implications of such practices.
arXiv Detail & Related papers (2024-11-06T18:40:38Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- German AI Start-Ups and AI Ethics: Using A Social Practice Lens for Assessing and Implementing Socio-Technical Innovation [0.0]
This paper introduces a practice-based approach for understanding ethical AI.
We present empirical findings from our study on the operationalization of ethics in German AI start-ups.
We suggest that ethical AI practices can be broken down into principles, needs, narratives, materializations, and cultural genealogies.
arXiv Detail & Related papers (2022-06-20T19:44:39Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Ethics as a service: a pragmatic operationalisation of AI Ethics [1.1083289076967895]
A gap exists between the theory of AI ethics principles and the practical design of AI systems.
This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited.
arXiv Detail & Related papers (2021-02-11T21:29:25Z)
- An Ecosystem Approach to Ethical AI and Data Use: Experimental Reflections [0.0]
This paper offers a methodology to identify the needs of AI practitioners when it comes to confronting and resolving ethical challenges.
We offer a grassroots approach to operational ethics based on dialog and mutualised responsibility.
arXiv Detail & Related papers (2020-12-27T07:41:26Z)
- AI virtues -- The missing link in putting AI ethics into practice [0.0]
The paper defines four basic AI virtues, namely justice, honesty, responsibility and care.
It defines two second-order AI virtues, prudence and fortitude, that bolster achieving the basic virtues.
arXiv Detail & Related papers (2020-11-25T14:14:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.