Conceptualising Healthcare-Seeking as an Activity to Explain Technology
Use: A Case of M-health
- URL: http://arxiv.org/abs/2108.10082v1
- Date: Mon, 23 Aug 2021 11:28:21 GMT
- Title: Conceptualising Healthcare-Seeking as an Activity to Explain Technology
Use: A Case of M-health
- Authors: Karen Sowon and Wallace Chigona
- Abstract summary: We propose the conceptualisation of healthcare-seeking as an activity to offer a richer explanation of technology utilisation.
This is an interpretivist study drawing on Activity Theory to conceptualise healthcare-seeking as the minimum context needed to explicate use.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The purpose of this paper is to engage with the Information Systems' contexts
of use as a means to explain nuanced human-technology interaction. In this
paper, we specifically propose the conceptualisation of healthcare-seeking as
an activity to offer a richer explanation of technology utilisation. This is an
interpretivist study drawing on Activity Theory (AT) to conceptualise
healthcare-seeking as the minimum context needed to explicate use. A framework
of the core aspects of AT is used to analyse one empirical mHealth case from a
Kenyan context, thus illustrating how AT can be applied to study technology use.
The paper explicates technology use by explaining the various utilisation
behaviours that may emerge in a complex human-technology interaction context,
ranging from a complex adoption process to mechanisms for determining
continuance that differentiate trust in the intervention from trust in the
information, and to potential technology coping strategies. The paper is a
novel attempt to
operationalise AT to study technology use. It thus offers a broader explication
of use while providing insights for design and implementation made possible by
the conceptualisation of healthcare-seeking as an activity. Such insights may
be useful in the design of patient-centred systems.
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models or model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z)
- Exploration of Attention Mechanism-Enhanced Deep Learning Models in the Mining of Medical Textual Data [3.22071437711162]
The research explores the utilization of a deep learning model employing an attention mechanism in medical text mining.
It aims to enhance the model's capability to identify essential medical information by incorporating deep learning and attention mechanisms.
arXiv Detail & Related papers (2024-05-23T00:20:14Z)
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- An Interactive Interpretability System for Breast Cancer Screening with Deep Learning [11.28741778902131]
We propose an interactive system to take advantage of state-of-the-art interpretability techniques to assist radiologists with breast cancer screening.
Our system integrates a deep learning model into the radiologists' workflow and provides novel interactions to promote understanding of the model's decision-making process.
arXiv Detail & Related papers (2022-09-30T02:19:49Z)
- Align, Reason and Learn: Enhancing Medical Vision-and-Language Pre-training with Knowledge [68.90835997085557]
We propose a systematic and effective approach to enhance structured medical knowledge from three perspectives.
First, we align the representations of the vision encoder and the language encoder through knowledge.
Second, we inject knowledge into the multi-modal fusion model to enable the model to perform reasoning using knowledge as the supplementation of the input image and text.
Third, we guide the model to put emphasis on the most critical information in images and texts by designing knowledge-induced pretext tasks.
arXiv Detail & Related papers (2022-09-15T08:00:01Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z)
- Representation Learning for Networks in Biology and Medicine: Advancements, Challenges, and Opportunities [18.434430658837258]
We have witnessed a rapid expansion of representation learning techniques into modeling, analysis, and learning with networks.
In this review, we put forward an observation that long-standing principles of network biology and medicine can provide the conceptual grounding for representation learning.
We synthesize a spectrum of algorithmic approaches that leverage topological features to embed networks into compact vector spaces.
arXiv Detail & Related papers (2021-04-11T00:20:00Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.