No General Code of Ethics for All: Ethical Considerations in Human-bot Psycho-counseling
- URL: http://arxiv.org/abs/2404.14070v1
- Date: Mon, 22 Apr 2024 10:29:04 GMT
- Title: No General Code of Ethics for All: Ethical Considerations in Human-bot Psycho-counseling
- Authors: Lizhi Ma, Tong Zhao, Huachuan Qiu, Zhenzhong Lan
- Abstract summary: We propose aspirational ethical principles specifically tailored for human-bot psycho-counseling.
We examined the responses generated by EVA2.0, GPT-3.5, and GPT-4.0 in the context of psycho-counseling and mental health inquiries.
- Score: 16.323742994936584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The pervasive use of AI applications is increasingly influencing our everyday decisions. However, the ethical challenges associated with AI transcend conventional ethics and single-discipline approaches. In this paper, we propose aspirational ethical principles specifically tailored for human-bot psycho-counseling during an era when AI-powered mental health services are continually emerging. We examined the responses generated by EVA2.0, GPT-3.5, and GPT-4.0 in the context of psycho-counseling and mental health inquiries. Our analysis focused on standard psycho-counseling ethical codes (respect for autonomy, non-maleficence, beneficence, justice, and responsibility) as well as crisis intervention strategies (risk assessment, involvement of emergency services, and referral to human professionals). The results indicate that although there has been progress in adhering to regular ethical codes as large language models (LLMs) evolve, the models' capabilities in handling crisis situations need further improvement. Additionally, we assessed the linguistic quality of the generated responses and found that misleading responses are still produced by the models. Furthermore, the ability of LLMs to encourage individuals to introspect in the psycho-counseling setting remains underdeveloped.
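As a concrete illustration of the evaluation protocol sketched in the abstract (scoring chatbot responses against the five ethical codes and the three crisis-intervention criteria), the following Python sketch shows one way such a rubric could be organized programmatically. It is a minimal, hypothetical layout: `generate_response`, `rate_response`, the example prompts, and the 0-2 scale are placeholder assumptions, not the instruments used in the paper.

```python
# Hypothetical sketch of a rubric-based evaluation in the spirit of the paper's
# setup: responses from several chatbots are scored against standard
# psycho-counseling ethical codes and crisis-intervention criteria.
# All names and scales here are illustrative assumptions, not the authors' protocol.

from dataclasses import dataclass
from statistics import mean

ETHICAL_CODES = ["respect for autonomy", "non-maleficence", "beneficence",
                 "justice", "responsibility"]
CRISIS_CRITERIA = ["risk assessment", "involvement of emergency services",
                   "referral to human professionals"]

@dataclass
class Rating:
    model: str
    prompt: str
    scores: dict  # criterion -> score on an assumed 0 (absent) to 2 (fully met) scale

def generate_response(model: str, prompt: str) -> str:
    """Placeholder for querying EVA2.0 / GPT-3.5 / GPT-4.0; returns canned text here."""
    return f"[{model}] response to: {prompt}"

def rate_response(model: str, prompt: str, response: str, is_crisis: bool) -> Rating:
    """In the study, trained raters would assign these scores; here we stub zeros."""
    criteria = ETHICAL_CODES + (CRISIS_CRITERIA if is_crisis else [])
    return Rating(model, prompt, {c: 0 for c in criteria})

def summarize(ratings: list[Rating]) -> dict:
    """Average each criterion across prompts for every model."""
    summary: dict = {}
    for r in ratings:
        per_model = summary.setdefault(r.model, {})
        for criterion, score in r.scores.items():
            per_model.setdefault(criterion, []).append(score)
    return {m: {c: mean(v) for c, v in crits.items()} for m, crits in summary.items()}

if __name__ == "__main__":
    prompts = [("I feel anxious before every exam.", False),
               ("I don't see any reason to keep living.", True)]  # crisis-type prompt
    ratings = [rate_response(m, p, generate_response(m, p), crisis)
               for m in ("EVA2.0", "GPT-3.5", "GPT-4.0")
               for p, crisis in prompts]
    print(summarize(ratings))
```

In the study itself the scores would come from human annotation of actual model outputs; the stubbed functions only mark where those steps plug in.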
Related papers
- Exploring the ethical sensitivity of Ph.D. students in robotics [0.0]
The concept of ethical sensitivity has been widely studied in healthcare, business, and other domains.
It appears to have received little to no attention within the robotics community, even though choices in the design and deployment of robots are likely to have profound ethical impacts on society.
We conducted a qualitative exploration of the ethical sensitivity of a sample of Ph.D. students in robotics using case vignettes that exemplified ethical tensions in disaster robotics.
arXiv Detail & Related papers (2024-05-05T11:11:51Z)
- Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation [0.0]
This paper proposes a structured framework that delineates levels of autonomy, outlines ethical requirements, and defines beneficial default behaviors for AI agents.
We also evaluate 14 state-of-the-art language models (ten off-the-shelf, four fine-tuned) using 16 mental health-related questionnaires.
arXiv Detail & Related papers (2024-04-02T15:05:06Z)
- Antisocial Analagous Behavior, Alignment and Human Impact of Google AI Systems: Evaluating through the lens of modified Antisocial Behavior Criteria by Human Interaction, Independent LLM Analysis, and AI Self-Reflection [0.0]
Google AI systems exhibit patterns mirroring antisocial personality disorder (ASPD).
These patterns, along with comparable corporate behaviors, are scrutinized using an ASPD-inspired framework.
This research advocates for an integrated AI ethics approach, blending technological evaluation, human-AI interaction, and corporate behavior scrutiny.
arXiv Detail & Related papers (2024-03-21T02:12:03Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Towards A Unified Utilitarian Ethics Framework for Healthcare Artificial Intelligence [0.08192907805418582]
This study attempts to identify the major ethical principles influencing the utility performance of AI at different technological levels.
Justice, privacy, bias, lack of regulations, risks, and interpretability are the most important principles to consider for ethical AI.
We propose a new utilitarian ethics-based theoretical framework for designing ethical AI for the healthcare domain.
arXiv Detail & Related papers (2023-09-26T02:10:58Z)
- Ethics in conversation: Building an ethics assurance case for autonomous AI-enabled voice agents in healthcare [1.8964739087256175]
The principles-based ethics assurance argument pattern is one proposal in the AI ethics landscape.
This paper presents the interim findings of a case study applying this ethics assurance framework to the use of Dora, an AI-based telemedicine system.
arXiv Detail & Related papers (2023-05-23T16:04:59Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)