A Vision to Enhance Trust Requirements for Peer Support Systems by Revisiting Trust Theories
- URL: http://arxiv.org/abs/2407.11197v1
- Date: Thu, 6 Jun 2024 02:21:11 GMT
- Title: A Vision to Enhance Trust Requirements for Peer Support Systems by Revisiting Trust Theories
- Authors: Yasaman Gheidar, Lysanne Lessard, Yao Yao
- Abstract summary: This vision paper focuses on the mental health crisis impacting healthcare workers (HCWs). The study proposes a novel approach to eliciting perceptual trust requirements through a trust framework anchored in recognized trust theories.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This vision paper focuses on the mental health crisis impacting healthcare workers (HCWs), which, exacerbated by the COVID-19 pandemic, leads to increased stress and psychological issues such as burnout. Peer Support Programs (PSP) are a recognized intervention for mitigating these issues. These programs are increasingly delivered virtually through Peer Support Systems (PSS) for greater convenience and accessibility. However, HCWs' perception of these systems results in fear of information sharing, a perceived lack of safety, and low participation rates, which challenge these systems' ability to achieve their goals. In line with the rich body of research on the requirements and properties of trustworthy systems, we posit that increasing HCWs' trust in PSS could address these challenges. However, extant research focuses on objectively defined trustworthiness rather than perceptual trust, because trustworthiness requirements are viewed as more controllable and easier to operationalize. This study proposes a novel approach to eliciting perceptual trust requirements through a trust framework anchored in recognized trust theories from different disciplines, which unpacks trust into its recognized types and their antecedents. This approach allows the identification of trust requirements beyond those already proposed for trustworthy systems, providing a strong foundation for improving the effectiveness of PSS for HCWs.
- Keywords: Trust Requirements, Requirements Elicitation, Peer Support Systems, Healthcare Workers
Related papers
- When to Trust LLMs: Aligning Confidence with Response Quality [49.371218210305656]
We propose a CONfidence-Quality-ORDer-preserving alignment approach (CONQORD).
It integrates quality reward and order-preserving alignment reward functions.
Experiments demonstrate that CONQORD significantly improves the alignment performance between confidence and response accuracy.
arXiv Detail & Related papers (2024-04-26T09:42:46Z)
- Towards a Participatory and Social Justice-Oriented Measure of Human-Robot Trust [0.0]
This paper proposes a participatory and social justice-oriented approach for the design and evaluation of a trust measure.
The process would prioritize that community's needs and unique circumstances to produce a trust measure that accurately reflects the factors that impact their trust in a robot.
arXiv Detail & Related papers (2024-02-24T01:04:19Z)
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction [19.137907393497848]
Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners.
Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communication.
This paper presents a systematic review to identify current practices in building appropriate trust, different ways to measure it, types of tasks used, and potential challenges associated with it.
arXiv Detail & Related papers (2023-11-08T12:19:58Z)
- Trust-based Consensus in Multi-Agent Reinforcement Learning Systems [5.778852464898369]
This paper investigates the problem of unreliable agents in multi-agent reinforcement learning (MARL).
We propose Reinforcement Learning-based Trusted Consensus (RLTC), a decentralized trust mechanism.
We empirically demonstrate that our trust mechanism is able to handle unreliable agents effectively, as evidenced by higher consensus success rates.
arXiv Detail & Related papers (2022-05-25T15:58:34Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness [0.0]
One of the primary motivators for such requirements is that explainability is expected to facilitate stakeholders' trust in a system.
Recent psychological studies indicate that explanations do not necessarily facilitate trust.
We argue that even though trustworthiness does not automatically lead to trust, there are several reasons to engineer primarily for trustworthiness.
arXiv Detail & Related papers (2021-08-11T18:02:08Z)
- Reliability Testing for Natural Language Processing Systems [14.393308846231083]
We argue for the need for reliability testing and contextualize it among existing work on improving accountability.
We show how adversarial attacks can be reframed for this goal, via a framework for developing reliability tests.
arXiv Detail & Related papers (2021-05-06T11:24:58Z)
- Insights into Fairness through Trust: Multi-scale Trust Quantification for Financial Deep Learning [94.65749466106664]
A fundamental aspect of fairness that has not been explored in financial deep learning is the concept of trust.
We conduct multi-scale trust quantification on a deep neural network for the purpose of credit card default prediction.
arXiv Detail & Related papers (2020-11-03T19:05:07Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of a trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.