P4AI: Approaching AI Ethics through Principlism
- URL: http://arxiv.org/abs/2111.14062v1
- Date: Sun, 28 Nov 2021 06:25:49 GMT
- Title: P4AI: Approaching AI Ethics through Principlism
- Authors: Andre Fu, Elisa Ding, Mahdi S. Hosseini, and Konstantinos N. Plataniotis
- Abstract summary: We outline a novel ethical framework, P4AI: Principlism for AI, an augmented principlistic view of ethical dilemmas within AI.
We suggest using P4AI to make concrete recommendations to the community to mitigate the climate and privacy crises.
- Score: 34.741570387332764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of computer vision is rapidly evolving, particularly in the context
of new methods of neural architecture design. These models contribute to (1)
the Climate Crisis - increased CO2 emissions and (2) the Privacy Crisis - data
leakage concerns. To address the often overlooked impact the Computer Vision
(CV) community has on these crises, we outline a novel ethical framework,
P4AI: Principlism for AI, an augmented principlistic view of ethical
dilemmas within AI. We then suggest using P4AI to make concrete recommendations
to the community to mitigate the climate and privacy crises.
Related papers
- Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI.
An increasing number of efforts address this problem, either by (i) contributing research into the risks of AI and their effective mitigation or by (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z)
- Exposing the Impact of GenAI for Cybercrime: An Investigation into the Dark Side [1.0613539657019528]
Generative AI models have sparked significant debate over safety, ethical risks, and dual-use concerns.
This paper provides empirical evidence regarding generative AI's association with malicious internet-related activities and cybercrime.
arXiv Detail & Related papers (2025-05-29T17:57:01Z)
- From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate [69.05573887799203]
Much of this debate has concentrated on AI's direct environmental impacts without addressing its significant indirect effects.
This paper examines how the problem of Jevons' Paradox applies to AI, whereby efficiency gains may paradoxically spur increased consumption.
We argue that understanding these second-order impacts requires an interdisciplinary approach, combining lifecycle assessments with socio-economic analyses.
arXiv Detail & Related papers (2025-01-27T22:45:06Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization (the textbook ERM objective is sketched after this entry for reference).
We hope to provide actionable guidance for building AI systems that meet emerging trustworthiness standards.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
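For context, the "components of empirical risk minimization" in this entry are the pieces of the standard ERM objective. The textbook form below is supplied only for orientation; the notation is generic, not taken from the paper.

```latex
% Textbook empirical risk minimization (ERM) objective, in standard notation.
% \mathcal{F}: hypothesis class, \ell: loss, \Omega: regularizer, \lambda: its weight.
\hat{f} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{F}}\;
  \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i),\, y_i\bigr) \;+\; \lambda\, \Omega(f)
```

Each symbol marks a design choice (the hypothesis class, the loss, the regularizer and its weight) onto which trustworthiness requirements can be mapped.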
- Mapping the individual, social, and biospheric impacts of Foundation Models [0.39843531413098965]
This paper offers a critical framework to account for the social, political, and environmental dimensions of foundation models and generative AI.
We identify 14 categories of risks and harms and map them according to their individual, social, and biospheric impacts.
arXiv Detail & Related papers (2024-07-24T10:05:40Z)
- Generative AI and the problem of existential risk [0.0]
Generative AI has been a focal point for concerns about AI's perceived existential risk.
This chapter aims to demystify the debate by highlighting the key worries that underpin existential risk fears in relation to generative AI.
arXiv Detail & Related papers (2024-07-18T10:16:24Z)
- AI Governance and Accountability: An Analysis of Anthropic's Claude [0.0]
This paper examines the AI governance landscape, focusing on Anthropic's Claude, a foundational AI model.
We analyze Claude through the lens of the NIST AI Risk Management Framework and the EU AI Act, identifying potential threats and proposing mitigation strategies.
arXiv Detail & Related papers (2024-05-02T23:37:06Z)
- Survey on AI Ethics: A Socio-technical Perspective [0.9374652839580183]
Ethical concerns associated with AI are multifaceted, including challenging issues of fairness, privacy and data protection, responsibility and accountability, safety and robustness, transparency and explainability, and environmental impact.
This work unifies the current and future ethical concerns of deploying AI into society.
arXiv Detail & Related papers (2023-11-28T21:00:56Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Are You Worthy of My Trust?: A Socioethical Perspective on the Impacts of Trustworthy AI Systems on the Environment and Human Society [0.47138177023764666]
We offer a brief, high-level overview of societal impacts of AI systems.
We highlight the need for multi-disciplinary governance and convergence throughout an AI system's lifecycle.
arXiv Detail & Related papers (2023-09-18T03:07:47Z)
- A Pathway Towards Responsible AI Generated Content [68.13835802977125]
We focus on 8 main concerns that may hinder the healthy development and deployment of AIGC in practice.
These concerns include risks from (1) privacy; (2) bias, toxicity, misinformation; (3) intellectual property (IP); (4) robustness; (5) open source and explanation; (6) technology abuse; (7) consent, credit, and compensation; (8) environment.
arXiv Detail & Related papers (2023-03-02T14:58:40Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Reconsidering CO2 emissions from Computer Vision [39.04604349338802]
We analyze the total cost of CO2 emissions by breaking it into (1) the architecture creation cost and (2) the lifetime evaluation cost (a toy numerical sketch follows this entry).
We show that over time, these costs are non-negligible and have a direct impact on our future.
We propose adding "enforcement" as a pillar of ethical AI and provide recommendations for how architecture designers and the broader CV community can curb the climate crisis.
arXiv Detail & Related papers (2021-04-18T04:01:40Z)
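The two-part cost breakdown in the entry above is easy to make concrete. Below is a minimal, hypothetical sketch of how a one-time architecture creation cost and an accumulated lifetime evaluation cost combine; every number and name here (including the function and its parameters) is an illustrative placeholder, not a value or API from the paper.

```python
# Hypothetical sketch: total CO2 = (1) one-time architecture creation cost
#                                + (2) accumulated lifetime evaluation cost.
# All figures are illustrative placeholders, not values from the paper.

def total_co2_kg(creation_kwh: float,
                 inference_kwh_per_query: float,
                 queries_per_day: float,
                 lifetime_days: float,
                 grid_kg_co2_per_kwh: float = 0.4) -> float:  # assumed grid intensity
    """Estimate lifetime CO2 (kg) for one model."""
    creation_kg = creation_kwh * grid_kg_co2_per_kwh            # (1) design + training
    evaluation_kg = (inference_kwh_per_query * queries_per_day
                     * lifetime_days * grid_kg_co2_per_kwh)     # (2) lifetime inference
    return creation_kg + evaluation_kg

# Example: a model trained once, then served for three years.
print(total_co2_kg(creation_kwh=5_000,
                   inference_kwh_per_query=0.0003,
                   queries_per_day=1_000_000,
                   lifetime_days=3 * 365))
```

With these made-up figures the lifetime evaluation term dwarfs the one-time creation cost, which is the pattern the entry's "over time, these costs are non-negligible" claim points to.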
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.