How to build trust in answers given by Generative AI for specific, and vague, financial questions
- URL: http://arxiv.org/abs/2408.14593v1
- Date: Mon, 26 Aug 2024 19:26:48 GMT
- Title: How to build trust in answers given by Generative AI for specific, and vague, financial questions
- Authors: Alex Zarifis, Xusen Cheng
- Abstract summary: Building trust for consumers differs when they ask a specific financial question rather than a vague one.
Four ways to build trust in both scenarios are human oversight and being in the loop; transparency and control; accuracy and usefulness; and ease of use and support.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purpose: Generative artificial intelligence (GenAI) has progressed in its ability and has seen explosive growth in adoption. However, the consumer's perspective on its use, particularly in specific scenarios such as financial advice, is unclear. This research develops a model of how to build trust in the advice given by GenAI when answering financial questions. Design/methodology/approach: The model is tested with survey data using structural equation modelling (SEM) and multi-group analysis (MGA). The MGA compares two scenarios, one where the consumer asks a specific question and one where a vague question is asked. Findings: This research identifies that building trust for consumers is different when they ask a specific financial question in comparison to a vague one. Humanness has a different effect in the two scenarios: when a financial question is specific, human-like interaction does not strengthen trust, while when a question is vague, (1) humanness builds trust. The four ways to build trust in both scenarios are (2) human oversight and being in the loop, (3) transparency and control, (4) accuracy and usefulness and, finally, (5) ease of use and support. Originality/value: This research contributes to a better understanding of the consumer's perspective when using GenAI for financial questions and highlights the importance of understanding GenAI in specific contexts from specific stakeholders.
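As a rough illustration of the multi-group comparison behind the MGA step, the sketch below uses hypothetical data and a deliberately simplified model: ordinary least squares on a single humanness-to-trust path stands in for a full SEM estimator. It fits the path separately for a "specific question" group and a "vague question" group, then tests whether the two coefficients differ, which is the core idea of comparing path estimates across groups.

```python
import numpy as np

def fit_slope(x, y):
    """OLS slope and its standard error for y ~ x (with intercept)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)           # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)      # coefficient covariance
    return beta[1], np.sqrt(cov[1, 1])

def compare_groups(slope_a, se_a, slope_b, se_b):
    """z-statistic for the difference between two group path coefficients."""
    return (slope_a - slope_b) / np.sqrt(se_a**2 + se_b**2)

# Hypothetical data: humanness ratings and trust, one sample per scenario.
rng = np.random.default_rng(0)
humanness = rng.uniform(1, 7, 200)
trust_specific = 4.0 + 0.05 * humanness + rng.normal(0, 1, 200)  # weak path
trust_vague    = 2.0 + 0.60 * humanness + rng.normal(0, 1, 200)  # strong path

b_s, se_s = fit_slope(humanness, trust_specific)
b_v, se_v = fit_slope(humanness, trust_vague)
z = compare_groups(b_v, se_v, b_s, se_s)
print(f"specific path: {b_s:.2f}, vague path: {b_v:.2f}, z = {z:.2f}")
```

A large z indicates the humanness path differs between scenarios, mirroring the paper's finding that humanness builds trust only for vague questions. A real MGA would estimate the full latent-variable model in both groups jointly and constrain parameters to equality, rather than comparing two separate regressions.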
Related papers
- What Makes a Good Natural Language Prompt?
We conduct a meta-analysis surveying more than 150 prompting-related papers from leading NLP and AI conferences from 2022 to 2025. We propose a property- and human-centric framework for evaluating prompt quality, encompassing 21 properties categorized into six dimensions. We then empirically explore multi-property prompt enhancements in reasoning tasks, observing that single-property enhancements often have the greatest impact.
arXiv Detail & Related papers (2025-06-07T23:19:27Z) - GenAI vs. Human Fact-Checkers: Accurate Ratings, Flawed Rationales
GPT-4o, one of the most used AI models in consumer applications, outperforms other models, but all models exhibit only moderate agreement with human coders.
We also assess the effectiveness of summarized versus full content inputs, finding that summarized content holds promise for improving efficiency without sacrificing accuracy.
arXiv Detail & Related papers (2025-02-20T17:47:40Z) - Quriosity: Analyzing Human Questioning Behavior and Causal Inquiry through Curiosity-Driven Queries
We present Quriosity, a collection of 13.5K naturally occurring questions from three diverse sources.
Our analysis reveals a significant presence of causal questions (up to 42%) in the dataset.
arXiv Detail & Related papers (2024-05-30T17:55:28Z) - Evaluating if trust and personal information privacy concerns are barriers to using health insurance that explicitly utilizes AI
This research explores whether trust and privacy concern are barriers to the adoption of AI in health insurance.
Findings show that trust is significantly lower in the second scenario where AI is visible.
Privacy concerns are higher with AI but the difference is not statistically significant within the model.
arXiv Detail & Related papers (2024-01-20T15:02:56Z) - A Diachronic Perspective on User Trust in AI under Uncertainty
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z) - Decoding trust: A reinforcement learning perspective
Behavioral experiments on the trust game have shown that trust and trustworthiness are universal among human beings.
We turn to the paradigm of reinforcement learning, where individuals update their strategies by evaluating the long-term return through accumulated experience.
In the pairwise scenario, we reveal that high levels of trust and trustworthiness emerge when individuals appreciate both their historical experience and returns in the future.
arXiv Detail & Related papers (2023-09-26T01:06:29Z) - GenAIPABench: A Benchmark for Generative AI-based Privacy Assistants
This study introduces GenAIPABench, a benchmark for evaluating Generative AI-based Privacy Assistants (GenAIPAs).
GenAIPABench includes: 1) A set of questions about privacy policies and data protection regulations, with annotated answers for various organizations and regulations; 2) Metrics to assess the accuracy, relevance, and consistency of responses; and 3) A tool for generating prompts to introduce privacy documents and varied privacy questions to test system robustness.
We evaluated three leading GenAI systems, ChatGPT-4, Bard, and Bing AI, using GenAIPABench to gauge their effectiveness as GenAIPAs.
arXiv Detail & Related papers (2023-09-10T21:15:42Z) - Trust in AI and Its Role in the Acceptance of AI Technologies [12.175031903660972]
This paper explains the role of trust on the intention to use AI technologies.
Study 1 examined the role of trust in the use of AI voice assistants based on survey responses from college students.
In Study 2, using data from a representative sample of the U.S. population, different dimensions of trust were examined.
arXiv Detail & Related papers (2022-03-23T19:18:19Z) - Trust in AI: Interpretability is not necessary or sufficient, while
black-box interaction is necessary and sufficient [0.0]
The problem of human trust in artificial intelligence is one of the most fundamental problems in applied machine learning.
We draw from statistical learning theory and sociological lenses on human-automation trust to motivate an AI-as-tool framework.
We clarify the role of interpretability in trust with a ladder of model access.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2020-11-07T22:52:21Z) - Knowledge-driven Data Construction for Zero-shot Evaluation in Commonsense Question Answering
We propose a novel neuro-symbolic framework for zero-shot question answering across commonsense tasks.
We vary the set of language models, training regimes, knowledge sources, and data generation strategies, and measure their impact across tasks.
We show that, while an individual knowledge graph is better suited for specific tasks, a global knowledge graph brings consistent gains across different tasks.
arXiv Detail & Related papers (2020-11-03T19:05:07Z) - Insights into Fairness through Trust: Multi-scale Trust Quantification for Financial Deep Learning
A fundamental aspect of fairness that has not been explored in financial deep learning is the concept of trust.
We conduct multi-scale trust quantification on a deep neural network for the purpose of credit card default prediction.
arXiv Detail & Related papers (2020-10-15T03:07:23Z) - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-01-07T15:33:48Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.