Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s)
- URL: http://arxiv.org/abs/2501.11705v1
- Date: Mon, 20 Jan 2025 19:38:21 GMT
- Title: Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s)
- Authors: Brian E. Perron, Lauri Goldkind, Zia Qi, Bryan G. Victor
- Abstract summary: The authors argue that ethical concerns about AI deployment vary significantly based on implementation context and specific use cases.
They propose a dimensional risk assessment approach that considers factors like data sensitivity, professional oversight requirements, and potential impact on client wellbeing.
- Abstract: This paper examines the responsible integration of artificial intelligence (AI) in human services organizations (HSOs), proposing a nuanced framework for evaluating AI applications across multiple dimensions of risk. The authors argue that ethical concerns about AI deployment -- including professional judgment displacement, environmental impact, model bias, and data laborer exploitation -- vary significantly based on implementation context and specific use cases. They challenge the binary view of AI adoption, demonstrating how different applications present varying levels of risk that can often be effectively managed through careful implementation strategies. The paper highlights promising solutions, such as local large language models, that can facilitate responsible AI integration while addressing common ethical concerns. The authors propose a dimensional risk assessment approach that considers factors like data sensitivity, professional oversight requirements, and potential impact on client wellbeing. They conclude by outlining a path forward that emphasizes empirical evaluation, starting with lower-risk applications and building evidence-based understanding through careful experimentation. This approach enables organizations to maintain high ethical standards while thoughtfully exploring how AI might enhance their capacity to serve clients and communities effectively.
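To make the dimensional risk assessment concrete, the sketch below scores a use case along the three dimensions the abstract names (data sensitivity, professional oversight requirements, and potential impact on client wellbeing). It is a minimal sketch only: the 0-3 scale, the additive aggregation, and the tier cut points are illustrative assumptions, not the authors' scheme.

```python
# Illustrative sketch of a dimensional risk assessment for an HSO use case.
# The three dimensions come from the paper's abstract; the 0-3 scale, the
# additive score, and the tier cut points are hypothetical, not the authors'.
from dataclasses import dataclass


@dataclass
class UseCaseRisk:
    name: str
    data_sensitivity: int          # 0 = public data ... 3 = protected client records
    oversight_required: int        # 0 = none ... 3 = licensed-professional review
    client_wellbeing_impact: int   # 0 = negligible ... 3 = direct effect on care

    def score(self) -> int:
        """Aggregate the three dimensions into one ordinal score (0-9)."""
        return (self.data_sensitivity
                + self.oversight_required
                + self.client_wellbeing_impact)

    def tier(self) -> str:
        """Map the score onto a coarse tier (cut points are illustrative)."""
        s = self.score()
        if s <= 2:
            return "lower risk: a reasonable starting point for empirical evaluation"
        if s <= 5:
            return "moderate risk: pilot only with professional oversight"
        return "higher risk: defer until evidence from lower-risk uses accumulates"


# Two hypothetical use cases at opposite ends of the risk spectrum.
for uc in (UseCaseRisk("drafting internal meeting summaries", 0, 1, 0),
           UseCaseRisk("triaging client intake notes", 3, 3, 3)):
    print(f"{uc.name}: score={uc.score()} -> {uc.tier()}")
```

The abstract also points to local large language models as a way to manage data-sensitivity risk by keeping client data on-premises. A minimal sketch of that pattern follows, assuming an Ollama server on its default local port and a locally pulled model; the endpoint and model name are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of calling a locally hosted LLM, one route the paper suggests
# for keeping sensitive client data on-premises. Assumes an Ollama server on
# its default port with a locally pulled model; endpoint and model name are
# assumptions, not details from the paper.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # any locally available model
        "prompt": "Summarize this de-identified case note: ...",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generation never leaves the local machine
```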
Related papers
- Fully Autonomous AI Agents Should Not be Developed [58.88624302082713] (2025-02-04)
This paper argues that fully autonomous AI agents should not be developed. In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels. Our analysis reveals that risks to people increase with the autonomy of a system.
- An Empirical Study on Decision-Making Aspects in Responsible Software Engineering for AI (2025-01-26)
This study investigates the ethical challenges and complexities inherent in responsible software engineering (RSE) for AI. Personal values, emerging roles, and awareness of AI's societal impact influence responsible decision-making in RSE for AI.
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization (2024-10-25)
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization. We hope to provide actionable guidance for building AI systems that meet emerging standards for AI trustworthiness.
- Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure [4.578401882034969] (2024-09-27)
We focus on how model performance evaluation may inform or inhibit probing of model limitations, biases, and other risks. Our findings can inform AI providers and legal scholars in designing interventions and policies that preserve open-source innovation while incentivizing ethical uptake.
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215] (2024-08-16)
We argue that clearer assessments of risks and harms to users become possible when we specialize the analysis to more concrete applications and their plausible users. We conduct a risk-oriented error analysis that can inform the design of a future system to be deployed with a lower risk of harm and better performance.
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936] (2024-08-08)
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models used as the "brain" of EAI agents have shown promising results for high-level task planning. However, deploying these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614] (2024-03-21)
General-purpose AI appears to have lowered the barriers for the public to use AI and harness its power. We introduce PARTICIP-AI, a framework for laypeople to speculate on and assess AI use cases and their impacts.
- Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524] (2023-12-15)
The push for Responsible AI (RAI) in research institutions underscores the increasing emphasis on integrating ethical considerations into AI design and development. This paper assesses awareness of and preparedness for the ethical risks inherent in AI design and development. Our results reveal knowledge gaps concerning ethical, responsible, and inclusive AI, including limited awareness of the available AI ethics frameworks.
- Ethics in conversation: Building an ethics assurance case for autonomous AI-enabled voice agents in healthcare [1.8964739087256175] (2023-05-23)
The principles-based ethics assurance argument pattern is one proposal in the AI ethics landscape. This paper presents the interim findings of a case study applying this ethics assurance framework to Dora, an AI-based telemedicine system.
- Quantitative AI Risk Assessments: Opportunities and Challenges [7.35411010153049] (2022-09-13)
The best way to reduce risks is to implement comprehensive AI lifecycle governance, in which risks can be quantified using metrics from the technical community. This paper explores these issues, focusing on the opportunities, challenges, and potential impacts of such an approach.
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948] (2021-05-07)
This paper proposes a comprehensive analysis of existing concepts of intelligence from different disciplines. The aim is to identify shared notions or discrepancies to consider when qualifying AI systems.