Evaluating if trust and personal information privacy concerns are barriers to using health insurance that explicitly utilizes AI
- URL: http://arxiv.org/abs/2401.11249v1
- Date: Sat, 20 Jan 2024 15:02:56 GMT
- Title: Evaluating if trust and personal information privacy concerns are barriers to using health insurance that explicitly utilizes AI
- Authors: Alex Zarifis, Peter Kawalek and Aida Azadegan
- Abstract summary: This research explores whether trust and privacy concerns are barriers to the adoption of AI in health insurance.
Findings show that trust is significantly lower in the second scenario, where AI is visible.
Privacy concerns are higher with AI, but the difference is not statistically significant within the model.
- Score: 0.6138671548064355
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trust and privacy have emerged as significant concerns in online
transactions. Sharing health information is especially sensitive, but it is
necessary for purchasing and utilizing health insurance. Evidence shows that
consumers are increasingly comfortable with technology in place of humans,
but the expanding use of AI potentially changes this. This research explores
whether trust and privacy concerns are barriers to the adoption of AI in
health insurance. Two scenarios are compared: in the first, a limited AI is
not part of the interface and its presence is not explicitly revealed to the
consumer; in the second, there is an AI interface and AI evaluation, and
this is explicitly revealed to the consumer. The two scenarios were modeled
and compared using structural equation modeling with partial least squares
multi-group analysis (SEM PLS-MGA; the group-comparison logic is sketched
below). The findings show that trust is significantly lower in the second
scenario, where AI is visible. Privacy concerns are higher with AI, but the
difference is not statistically significant within the model.
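PLS-MGA tests whether a structural path coefficient (e.g., trust driving intention to use) differs significantly between two groups, typically by bootstrapping the coefficient in each group and comparing the distributions. Below is a minimal sketch of that bootstrap group-comparison logic using only numpy and synthetic data; the single path, sample sizes, and effect sizes are illustrative assumptions, not the authors' actual model or software.

```python
# Minimal sketch of the group comparison behind PLS-MGA: estimate a
# structural path separately per scenario, then bootstrap the difference
# in path coefficients. All data and names here are synthetic/illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, beta):
    """Simulate one scenario: a 'trust' score driving 'intention to use'."""
    trust = rng.normal(size=n)
    intention = beta * trust + rng.normal(scale=0.5, size=n)
    return trust, intention

def path_coefficient(x, y):
    """Standardized slope, a single-path analogue of a PLS path coefficient."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.polyfit(x, y, 1)[0])

# Scenario 1: AI hidden (stronger trust effect). Scenario 2: AI visible.
x1, y1 = make_group(200, beta=0.7)
x2, y2 = make_group(200, beta=0.4)
observed = path_coefficient(x1, y1) - path_coefficient(x2, y2)

# Bootstrap the difference to approximate its sampling distribution.
diffs = np.empty(2000)
for b in range(2000):
    i1 = rng.integers(0, len(x1), len(x1))
    i2 = rng.integers(0, len(x2), len(x2))
    diffs[b] = path_coefficient(x1[i1], y1[i1]) - path_coefficient(x2[i2], y2[i2])

# Two-sided p-value from how often the bootstrap difference crosses zero.
p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
print(f"path difference = {observed:.3f}, bootstrap p ~= {p:.3f}")
```

The sign-crossing p-value is a simple approximation of the bootstrap comparison; dedicated PLS-SEM tools (e.g., SmartPLS or the R package seminr) implement the full Henseler-style MGA test across all paths of the structural model.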
Related papers
- The Dual Impact of Artificial Intelligence in Healthcare: Balancing Advancements with Ethical and Operational Challenges [1.3302498881305604]
This paper takes a close look at how AI is transforming areas such as diagnostics, precision medicine, and drug discovery.
Issues like patient privacy, safety, and the fairness of AI decisions are explored to understand whether AI in healthcare is a positive force, a potential risk, or perhaps both.
arXiv Detail & Related papers (2024-11-08T23:36:16Z)
- Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI [1.325665193924634]
This paper takes a critical approach to examining the intricacies of GDPR compliance, privacy, and ethics within AI companion services.
We analyze articles from public media about the company and its practices to gain insight into the trustworthiness of information provided in the policy.
The results reveal that, despite privacy notices, data collection practices might harvest personal data without users' full awareness.
arXiv Detail & Related papers (2024-11-07T07:36:19Z)
- Ethical AI in Retail: Consumer Privacy and Fairness [0.0]
The adoption of artificial intelligence (AI) in retail has significantly transformed the industry, enabling more personalized services and efficient operations.
However, the rapid implementation of AI technologies raises ethical concerns, particularly regarding consumer privacy and fairness.
This study aims to analyze the ethical challenges of AI applications in retail, explore ways retailers can implement AI technologies ethically while remaining competitive, and provide recommendations on ethical AI practices.
arXiv Detail & Related papers (2024-10-20T12:00:14Z)
- Trust in AI: Progress, Challenges, and Future Directions [6.724854390957174]
The increasing use of artificial intelligence (AI) systems in our daily life underscores the significance of trust/distrust in AI from a user perspective.
Trust/distrust in AI acts as a regulator and could significantly control the level of AI diffusion.
arXiv Detail & Related papers (2024-03-12T20:26:49Z)
- Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging [52.578054703818125]
Artificial Intelligence (AI) models are vulnerable to information leakage of their training data, which can be highly sensitive.
Differential Privacy (DP) aims to circumvent these susceptibilities by setting a quantifiable privacy budget.
We show that using very large privacy budgets can render reconstruction attacks impossible, while drops in performance are negligible. (A minimal sketch of budget-calibrated noise appears after this list.)
arXiv Detail & Related papers (2023-12-05T12:21:30Z)
- Advances in Automatically Rating the Trustworthiness of Text Processing Services [9.696492590163016]
AI services are known to have unstable behavior when subjected to changes in data, models or users.
The current approach of assessing AI services in a black box setting, where the consumer does not have access to the AI's source code or training data, is limited.
Our approach is inspired by the success of nutritional labeling in the food industry to promote health, and seeks to assess and rate AI services for trust from the perspective of an independent stakeholder.
arXiv Detail & Related papers (2023-02-04T14:27:46Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making. (A minimal calibration-error sketch appears after this list.)
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
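The two differential-privacy entries above both hinge on calibrating noise to a quantifiable privacy budget. Below is a minimal sketch of the classical Gaussian mechanism; the query, record count, and budget values are illustrative assumptions, not the papers' medical-imaging setup.

```python
# Minimal sketch of the classical Gaussian mechanism: noise is calibrated
# to the query's L2 sensitivity and the privacy budget (eps, delta).
# The query, record count, and budgets are illustrative assumptions.
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, eps, delta, rng):
    """Release `value` with (eps, delta)-DP via calibrated Gaussian noise."""
    # Classical bound, valid for eps <= 1; very large budgets (as studied in
    # the medical-imaging paper) call for the analytic Gaussian mechanism.
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return value + rng.normal(scale=sigma)

rng = np.random.default_rng(0)
true_mean = 0.42          # e.g., a mean statistic over a private dataset
sensitivity = 1.0 / 1000  # mean over 1000 records, each bounded in [0, 1]

# A tighter budget buys more noise; a looser one makes the noise negligible.
for eps in (0.1, 1.0):
    noisy = gaussian_mechanism(true_mean, sensitivity, eps, delta=1e-5, rng=rng)
    print(f"eps = {eps}: released mean {noisy:.4f}")
```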
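For the trust-calibration entry, a common way to quantify whether confidence scores deserve trust is expected calibration error (ECE). The sketch below computes ECE on synthetic predictions; the simulated overconfident model is an illustrative assumption, not the paper's data.

```python
# Minimal sketch of expected calibration error (ECE): the gap between a
# model's stated confidence and its realized accuracy, averaged over bins.
# The synthetic "overconfident model" below is an illustrative assumption.
import numpy as np

def expected_calibration_error(confidence, correct, n_bins=10):
    """Weighted average of |accuracy - confidence| over confidence bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidence > lo) & (confidence <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidence[mask].mean())
            ece += mask.mean() * gap  # weight bins by their share of samples
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=5000)
# Simulate overconfidence: true accuracy trails confidence by ~10 points.
correct = rng.uniform(size=5000) < np.clip(conf - 0.1, 0.0, 1.0)
print(f"ECE = {expected_calibration_error(conf, correct):.3f}")  # roughly 0.10
```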