Welzijn.AI: Developing Responsible Conversational AI for Elderly Care through Stakeholder Involvement
- URL: http://arxiv.org/abs/2502.07983v2
- Date: Thu, 24 Apr 2025 13:59:30 GMT
- Title: Welzijn.AI: Developing Responsible Conversational AI for Elderly Care through Stakeholder Involvement
- Authors: Bram van Dijk, Armel Lefebvre, Marco Spruit
- Abstract summary: Welzijn.AI is a digital solution for monitoring (mental) well-being in elderly populations. Three evaluations with different stakeholders were designed to disclose new perspectives on the strengths, weaknesses, design characteristics, and value requirements of Welzijn.AI.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Welzijn.AI as a new digital solution for monitoring (mental) well-being in elderly populations, and illustrate how the development of systems like Welzijn.AI can align with guidelines on responsible AI development. Three evaluations with different stakeholders were designed to disclose new perspectives on the strengths, weaknesses, design characteristics, and value requirements of Welzijn.AI. Evaluations concerned expert panels and involved patient federations, general practitioners, researchers, and the elderly themselves. Panels comprised interviews, a co-creation session, and feedback on a proof-of-concept implementation. Interview results were summarized in terms of Welzijn.AI's strengths, weaknesses, opportunities, and threats. The co-creation session ranked a variety of value requirements of Welzijn.AI with the Hundred Dollar Method. User evaluation comprised analysing proportions of (dis)agreement on statements targeting Welzijn.AI's design characteristics, and ranking desired social characteristics. Experts in the panel interviews acknowledged Welzijn.AI's potential to combat loneliness and extract patterns from elderly behaviour. The proof-of-concept evaluation complemented this with the design characteristics most appealing to the elderly: empathetic and varied interactions. Stakeholders also link the technology to the implementation context: it could help activate an individual's social network, but support should also be available to empower users. Yet, non-elderly and elderly experts also disclose challenges in properly understanding the application; non-elderly experts also highlight issues concerning privacy. In sum, incorporating all stakeholder perspectives in system development remains challenging. Still, our results benefit researchers, policy makers, and health professionals who aim to improve elderly care with technology.
Related papers
- Envisioning an AI-Enhanced Mental Health Ecosystem [1.534667887016089]
We explore various AI applications in peer support, self-help interventions, proactive monitoring, and data-driven insights.
We propose a hybrid ecosystem where AI assists but does not replace human providers, emphasising responsible deployment and evaluation.
arXiv Detail & Related papers (2025-03-19T04:21:38Z) - Deep Learning-Based Facial Expression Recognition for the Elderly: A Systematic Review [0.5242869847419834]
The rapid aging of the global population has highlighted the need for technologies to support the elderly.
Facial expression recognition (FER) systems offer a non-invasive means of monitoring emotional states.
This study presents a systematic review of deep learning-based FER systems, focusing on their applications for the elderly population.
arXiv Detail & Related papers (2025-02-04T11:05:24Z) - Toward Ethical AI: A Qualitative Analysis of Stakeholder Perspectives [0.0]
This study explores stakeholder perspectives on privacy in AI systems, focusing on educators, parents, and AI professionals.
Using qualitative analysis of survey responses from 227 participants, the research identifies key privacy risks, including data breaches, ethical misuse, and excessive data collection.
The findings provide actionable insights into balancing the benefits of AI with robust privacy protections.
arXiv Detail & Related papers (2025-01-23T02:06:25Z) - Cutting Through the Confusion and Hype: Understanding the True Potential of Generative AI [0.0]
This paper explores the nuanced landscape of generative AI (genAI).
It focuses on neural network-based models like Large Language Models (LLMs).
arXiv Detail & Related papers (2024-10-22T02:18:44Z) - The Ethics of Advanced AI Assistants [53.89899371095332]
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z) - Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice [63.20307830884542]
The next several decades may well be a turning point for humanity, comparable to the industrial revolution.
Launched a decade ago, the project is committed to a perpetual series of studies by multidisciplinary experts.
We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.
arXiv Detail & Related papers (2024-04-06T22:18:31Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - Analyzing Character and Consciousness in AI-Generated Social Content: A Case Study of Chirper, the AI Social Network [0.0]
The study embarks on a comprehensive exploration of AI behavior, analyzing the effects of diverse settings on Chirper's responses.
Through a series of cognitive tests, the study gauges the self-awareness and pattern recognition prowess of Chirpers.
An intriguing aspect of the research is the exploration of the potential influence of a Chirper's handle or personality type on its performance.
arXiv Detail & Related papers (2023-08-30T15:40:18Z) - Towards FATE in AI for Social Media and Healthcare: A Systematic Review [0.0]
This survey focuses on the concepts of fairness, accountability, transparency, and ethics (FATE) within the context of AI.
We found that statistical and intersectional fairness can support fairness in healthcare on social media platforms.
While solutions like simulation, data analytics, and automated systems are widely used, their effectiveness can vary.
arXiv Detail & Related papers (2023-06-05T17:25:42Z) - Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems [13.484359389266864]
This paper addresses whether the Dunning-Kruger Effect (DKE) can hinder appropriate reliance on AI systems.
DKE is a metacognitive bias due to which less-competent individuals overestimate their own skill and performance.
We found that participants who overestimate their performance tend to exhibit under-reliance on AI systems.
arXiv Detail & Related papers (2023-01-25T14:26:10Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information [6.570220157893279]
Interpretable machine learning (IML) is an urgent topic of research.
This paper focuses on a local-based, neural-specific interpretation process applied to textual and time-series data.
arXiv Detail & Related papers (2021-04-13T09:39:33Z) - Expanding Explainability: Towards Social Transparency in AI systems [20.41177660318785]
Social Transparency (ST) is a socio-technically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making.
Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.
arXiv Detail & Related papers (2021-01-12T19:44:27Z) - The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies [1.2762298148425795]
Lack of transparency is identified as one of the main barriers to implementation of AI systems in health care.
We review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems.
We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice.
arXiv Detail & Related papers (2020-07-31T09:08:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.