Envisioning Possibilities and Challenges of AI for Personalized Cancer Care
- URL: http://arxiv.org/abs/2408.10108v1
- Date: Mon, 19 Aug 2024 15:55:46 GMT
- Title: Envisioning Possibilities and Challenges of AI for Personalized Cancer Care
- Authors: Elaine Kong, Kuo-Ting Huang, Aakash Gautam
- Abstract summary: We identify critical gaps in current healthcare systems such as a lack of personalized care and insufficient cultural and linguistic accommodation.
AI, when applied to care, was seen as a way to address these issues by enabling real-time, culturally aligned, and linguistically appropriate interactions.
We also uncovered concerns about the implications of AI-driven personalization, such as data privacy, loss of human touch in caregiving, and the risk of echo chambers that limit exposure to diverse information.
- Score: 36.53434633571359
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of Artificial Intelligence (AI) in healthcare, including in caring for cancer survivors, has gained significant interest. However, gaps remain in our understanding of how such AI systems can provide care, especially for ethnic and racial minority groups who continue to face care disparities. Through interviews with six cancer survivors, we identify critical gaps in current healthcare systems such as a lack of personalized care and insufficient cultural and linguistic accommodation. AI, when applied to care, was seen as a way to address these issues by enabling real-time, culturally aligned, and linguistically appropriate interactions. We also uncovered concerns about the implications of AI-driven personalization, such as data privacy, loss of human touch in caregiving, and the risk of echo chambers that limit exposure to diverse information. We conclude by discussing the trade-offs between AI-enhanced personalization and the need for structural changes in healthcare that go beyond technological solutions, leading us to argue that we should begin by asking, "Why personalization?"
Related papers
- Towards Clinical AI Fairness: Filling Gaps in the Puzzle [15.543248260582217]
This review systematically pinpoints several deficiencies concerning both healthcare data and the provided AI fairness solutions.
We highlight the scarcity of research on AI fairness in many medical domains where AI technology is increasingly utilized.
To bridge these gaps, our review advances actionable strategies for both the healthcare and AI research communities.
arXiv Detail & Related papers (2024-05-28T07:42:55Z)
- Emotional Intelligence Through Artificial Intelligence: NLP and Deep Learning in the Analysis of Healthcare Texts [1.9374282535132377]
This manuscript presents a methodical examination of how Artificial Intelligence is used to assess emotions in healthcare-related texts.
We scrutinize numerous research studies that employ AI to augment sentiment analysis, categorize emotions, and forecast patient outcomes.
Challenges persist, including ensuring the ethical application of AI, safeguarding patient confidentiality, and addressing potential biases in algorithmic procedures.
arXiv Detail & Related papers (2024-03-14T15:58:13Z)
- Ensuring Trustworthy Medical Artificial Intelligence through Ethical and Philosophical Principles [4.705984758887425]
AI-based computer-assisted diagnosis and treatment tools can democratize healthcare by matching or surpassing the performance of clinical experts.
The democratization of such AI tools can reduce the cost of care, optimize resource allocation, and improve the quality of care.
However, integrating AI into healthcare raises several ethical and philosophical concerns, such as bias, transparency, autonomy, responsibility, and accountability.
arXiv Detail & Related papers (2023-04-23T04:14:18Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Holding AI to Account: Challenges for the Delivery of Trustworthy AI in Healthcare [8.351355707564153]
We examine the problem of trustworthy AI and explore what delivering this means in practice.
We argue here that this overlooks the important part played by organisational accountability in how people reason about and trust AI in socio-technical settings.
arXiv Detail & Related papers (2022-11-29T18:22:23Z)
- Investigating the Potential of Artificial Intelligence Powered Interfaces to Support Different Types of Memory for People with Dementia [22.89233407347665]
One of the most difficult challenges to address is supporting the fluctuating accessibility needs of people with dementia.
We present future directions for the design of AI-based systems to personalize an interface for dementia-related changes in different types of memory.
arXiv Detail & Related papers (2022-11-19T17:31:45Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.