Representational Ethical Model Calibration
- URL: http://arxiv.org/abs/2207.12043v2
- Date: Tue, 18 Oct 2022 22:03:24 GMT
- Title: Representational Ethical Model Calibration
- Authors: Robert Carruthers, Isabel Straw, James K Ruffle, Daniel Herron, Amy
Nelson, Danilo Bzdok, Delmiro Fernandez-Reyes, Geraint Rees, and Parashkev
Nachev
- Abstract summary: Epistemic equity is the comparative fidelity of the intelligence guiding decision-making.
No general framework for its quantification, let alone assurance, exists.
We introduce a comprehensive framework for Representational Ethical Model Calibration.
- Score: 0.7078141380481605
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Equity is widely held to be fundamental to the ethics of healthcare. In the
context of clinical decision-making, it rests on the comparative fidelity of
the intelligence -- evidence-based or intuitive -- guiding the management of
each individual patient. Though brought to recent attention by the
individuating power of contemporary machine learning, such epistemic equity
arises in the context of any decision guidance, whether traditional or
innovative. Yet no general framework for its quantification, let alone
assurance, currently exists. Here we formulate epistemic equity in terms of
model fidelity evaluated over learnt multi-dimensional representations of
identity crafted to maximise the captured diversity of the population,
introducing a comprehensive framework for Representational Ethical Model
Calibration. We demonstrate use of the framework on large-scale multimodal data
from UK Biobank to derive diverse representations of the population, quantify
model performance, and institute responsive remediation. We offer our approach
as a principled solution to quantifying and assuring epistemic equity in
healthcare, with applications across the research, clinical, and regulatory
domains.
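The framework lends itself to a simple audit loop: learn a compact representation of the population, partition it into subgroups, and compare model fidelity within each subgroup against the population as a whole. The sketch below is a minimal illustration of that loop using off-the-shelf scikit-learn components (PCA and k-means as stand-ins for the learnt deep representations described in the abstract, and a placeholder regression task); the synthetic data, names, and thresholds are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a representational calibration loop (illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))            # multimodal features (placeholder)
y = X[:, 0] * 2.0 + rng.normal(size=1000)  # outcome the clinical model predicts

# 1. Fit the decision-guidance model whose equity we want to audit.
model = LinearRegression().fit(X, y)

# 2. Learn a compact representation of identity and partition it into subgroups.
embedding = PCA(n_components=5).fit_transform(X)
groups = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(embedding)

# 3. Quantify model fidelity within each representational subgroup.
per_group_error = {
    g: mean_absolute_error(y[groups == g], model.predict(X[groups == g]))
    for g in np.unique(groups)
}

# 4. Flag subgroups whose error is far above the population error, as candidates
#    for remediation (e.g. targeted data collection or reweighted retraining).
overall = mean_absolute_error(y, model.predict(X))
flagged = [g for g, e in per_group_error.items() if e > 1.5 * overall]
print(f"population MAE={overall:.3f}, flagged subgroups={flagged}")
```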
Related papers
- The challenge of uncertainty quantification of large language models in medicine [0.0]
This study investigates uncertainty quantification in large language models (LLMs) for medical applications.
Our research frames uncertainty not as a barrier but as an essential part of knowledge that invites a dynamic and reflective approach to AI design.
arXiv Detail & Related papers (2025-04-07T17:24:11Z)
- Including frameworks of public health ethics in computational modelling of infectious disease interventions [36.437757915645385]
Many values recognised as important for ethical decision-making are missing from computational models.
We demonstrate a proof-of-concept approach to incorporate multiple public health values into the evaluation of a simple computational model for vaccination against a pathogen such as SARS-CoV-2.
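To make the idea concrete, the toy sketch below scores one vaccination allocation against two values, total burden and equity of burden between two groups; the discrete-time SIR dynamics, parameters, and value definitions are invented for illustration and are not the paper's model.

```python
# Toy multi-value evaluation of a vaccination strategy (illustrative only).
def sir_attack_rate(beta, gamma, s0, days=365):
    """Final attack rate of a discrete-time SIR epidemic in one group."""
    s, i, r = s0, 1e-3, 0.0
    for _ in range(days):
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r

def evaluate(coverage_a, coverage_b, beta=0.3, gamma=0.1):
    # Vaccination removes a fraction of each group from the susceptible pool.
    attack_a = sir_attack_rate(beta, gamma, s0=1.0 - coverage_a)
    attack_b = sir_attack_rate(beta, gamma, s0=1.0 - coverage_b)
    return {
        "total_burden": 0.5 * (attack_a + attack_b),   # utilitarian value
        "inequity": abs(attack_a - attack_b),          # equity value
    }

# Same total doses, allocated uniformly vs. skewed towards group A.
print("uniform:", evaluate(0.5, 0.5))
print("skewed :", evaluate(0.8, 0.2))
```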
arXiv Detail & Related papers (2025-01-31T04:22:25Z)
- An Explainable Biomedical Foundation Model via Large-Scale Concept-Enhanced Vision-Language Pre-training [40.16314726875265]
ConceptCLIP is the first explainable biomedical foundation model that achieves state-of-the-art diagnostic accuracy.
We develop ConceptCLIP through a novel dual-alignment approach that simultaneously learns global image-text representations and fine-grained region-concept associations.
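A hedged sketch of what a dual-alignment objective of this kind can look like is given below: a CLIP-style global image-text contrastive loss combined with a region-concept InfoNCE term. The tensor shapes, pooling choice, and temperature are assumptions for illustration, not ConceptCLIP's actual architecture or training code.

```python
# Generic dual-alignment losses: global image-text + region-concept (sketch).
import torch
import torch.nn.functional as F

def clip_contrastive(img_emb, txt_emb, temperature=0.07):
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def region_concept_alignment(region_emb, concept_emb, temperature=0.07):
    # region_emb: (B, R, D) region features; concept_emb: (B, C, D) concepts in
    # each paired report. Each concept compares its best-matching region in every
    # image of the batch and should prefer its own image (InfoNCE over images).
    region_emb = F.normalize(region_emb, dim=-1)
    concept_emb = F.normalize(concept_emb, dim=-1)
    sim = torch.einsum("bcd,erd->bcer", concept_emb, region_emb)  # (B, C, B, R)
    best = sim.max(dim=-1).values / temperature                   # (B, C, B)
    targets = torch.arange(best.size(0), device=best.device)
    targets = targets.view(-1, 1).expand(-1, best.size(1))        # (B, C)
    return F.cross_entropy(best.permute(0, 2, 1), targets)

B, R, C, D = 4, 16, 3, 128
loss = (clip_contrastive(torch.randn(B, D), torch.randn(B, D)) +
        region_concept_alignment(torch.randn(B, R, D), torch.randn(B, C, D)))
```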
arXiv Detail & Related papers (2025-01-26T16:07:11Z)
- Causal Representation Learning from Multimodal Biomedical Observations [57.00712157758845]
We develop flexible identification conditions for multimodal data and principled methods to facilitate the understanding of biomedical datasets.
The key theoretical contribution is the structural sparsity of causal connections between modalities.
Results on a real-world human phenotype dataset are consistent with established biomedical research.
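One generic way to express such structural sparsity as a training signal is an L1-penalised cross-modal interaction matrix between latent blocks, sketched below; this is an illustrative stand-in, not the paper's identification conditions or estimator, and all layer sizes are assumptions.

```python
# Cross-modal model with an L1-sparsified interaction matrix (illustrative).
import torch
import torch.nn as nn

class SparseCrossModalModel(nn.Module):
    def __init__(self, dim_a=32, dim_b=32, latent=8):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 64), nn.ReLU(), nn.Linear(64, latent))
        self.enc_b = nn.Sequential(nn.Linear(dim_b, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec_b = nn.Linear(latent, dim_b)
        # Entry (i, j) models how latent factor i of modality A influences
        # latent factor j of modality B; sparsity is encouraged in forward().
        self.cross = nn.Parameter(0.01 * torch.randn(latent, latent))

    def forward(self, x_a, x_b):
        z_a, z_b = self.enc_a(x_a), self.enc_b(x_b)
        x_b_hat = self.dec_b(z_b + z_a @ self.cross)
        recon = ((x_b_hat - x_b) ** 2).mean()
        sparsity = self.cross.abs().mean()     # structural sparsity penalty
        return recon + 0.1 * sparsity

model = SparseCrossModalModel()
loss = model(torch.randn(16, 32), torch.randn(16, 32))
loss.backward()
```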
arXiv Detail & Related papers (2024-11-10T16:40:27Z)
- Named Clinical Entity Recognition Benchmark [2.9332007863461893]
This report introduces a Named Clinical Entity Recognition Benchmark.
It addresses the crucial natural language processing (NLP) task of extracting structured information from clinical narratives.
The leaderboard provides a standardized platform for assessing diverse language models.
arXiv Detail & Related papers (2024-10-07T14:00:18Z)
- 3M-Health: Multimodal Multi-Teacher Knowledge Distillation for Mental Health Detection [9.469887408109251]
We introduce a Multimodal and Multi-Teacher Knowledge Distillation model for Mental Health Classification.
Unlike conventional approaches that often rely on simple concatenation to integrate diverse features, our model addresses the challenge of appropriately representing inputs of varying natures.
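The sketch below shows a generic multi-teacher distillation loss of the kind such a model might use: modality-specific teachers supply soft targets, and the student balances hard-label cross-entropy against the averaged KL divergence to the teachers. The temperature, mixing weight, and placeholder logits are assumptions, not the 3M-Health configuration.

```python
# Generic multi-teacher knowledge distillation loss (illustrative sketch).
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          temperature=2.0, alpha=0.5):
    ce = F.cross_entropy(student_logits, labels)
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = 0.0
    for t_logits in teacher_logits_list:
        p_teacher = F.softmax(t_logits / temperature, dim=-1)
        kd += F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    kd = kd / len(teacher_logits_list) * temperature ** 2
    return alpha * ce + (1 - alpha) * kd

student_logits = torch.randn(8, 3, requires_grad=True)
teachers = [torch.randn(8, 3) for _ in range(3)]   # e.g. per-modality teachers
labels = torch.randint(0, 3, (8,))
loss = multi_teacher_kd_loss(student_logits, teachers, labels)
loss.backward()
```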
arXiv Detail & Related papers (2024-07-12T06:22:45Z)
- Advancing Multimodal Data Fusion in Pain Recognition: A Strategy Leveraging Statistical Correlation and Human-Centered Perspectives [0.3749861135832073]
This research presents a novel multimodal data fusion methodology for pain behavior recognition.
We introduce two key innovations: 1) integrating data-driven statistical relevance weights into the fusion strategy, and 2) incorporating human-centric movement characteristics into multimodal representation learning.
Our findings have significant implications for promoting patient-centered healthcare interventions and supporting explainable clinical decision-making.
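As an illustration of data-driven relevance weighting, the sketch below scales each modality's features by a simple statistic (the mean absolute Pearson correlation of its features with the label) before late fusion; the choice of statistic and the synthetic data are assumptions, not the paper's exact recipe.

```python
# Correlation-weighted late fusion of two modalities (illustrative sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
modalities = {
    "body_movement": rng.normal(size=(500, 12)) + y[:, None] * 0.8,  # informative
    "physiology":    rng.normal(size=(500, 6)),                      # weak signal
}

def relevance_weight(X, y):
    corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return float(np.mean(corrs))

weights = {name: relevance_weight(X, y) for name, X in modalities.items()}
total = sum(weights.values())
fused = np.hstack([X * (weights[name] / total) for name, X in modalities.items()])

clf = LogisticRegression(max_iter=1000).fit(fused, y)
print({k: round(v / total, 3) for k, v in weights.items()}, clf.score(fused, y))
```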
arXiv Detail & Related papers (2024-03-30T11:13:18Z)
- XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z)
- Multimodal Machine Learning in Image-Based and Clinical Biomedicine: Survey and Prospects [2.1070612998322438]
The paper explores the transformative potential of multimodal models for clinical predictions.
Despite advancements, challenges such as data biases and the scarcity of "big data" in many biomedical domains persist.
arXiv Detail & Related papers (2023-11-04T05:42:51Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
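The core sampling idea can be sketched as follows: run the reverse diffusion process several times conditioned on the same image, then summarise the sampled masks with a mean (consensus) mask and a pixel-wise variance map. The denoiser here is a hypothetical conditional noise-prediction network and the DDPM schedule is generic, not the paper's configuration.

```python
# Sampling multiple plausible segmentation masks via DDPM ancestral sampling
# (generic sketch; `denoiser` is a hypothetical conditional noise predictor).
import torch

@torch.no_grad()
def sample_masks(denoiser, image, n_samples=8, timesteps=1000):
    betas = torch.linspace(1e-4, 0.02, timesteps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    masks = []
    for _ in range(n_samples):
        x = torch.randn(image.size(0), 1, *image.shape[2:])  # noisy mask
        for t in reversed(range(timesteps)):
            eps = denoiser(x, image, torch.full((image.size(0),), t))
            mean = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
            noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
            x = mean + betas[t].sqrt() * noise
        masks.append(torch.sigmoid(x))
    stacked = torch.stack(masks)              # (n_samples, B, 1, H, W)
    return stacked.mean(0), stacked.var(0)    # consensus mask, uncertainty map
```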
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- In-Bed Human Pose Estimation from Unseen and Privacy-Preserving Image Domains [22.92165116962952]
In-bed human posture estimation provides important health-related metrics with potential value in medical condition assessments.
We propose a multi-modal conditional variational autoencoder (MC-VAE) capable of reconstructing features from missing modalities seen during training.
We demonstrate that body positions can be effectively recognized from the available modality, achieving on par results with baseline models.
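A compact conditional-VAE sketch of reconstructing features of a missing modality from an available one is shown below; the layer sizes and single-pair setup are illustrative assumptions rather than the MC-VAE architecture described in the paper.

```python
# Conditional VAE reconstructing missing-modality features (illustrative sketch).
import torch
import torch.nn as nn

class CondVAE(nn.Module):
    def __init__(self, dim_avail=64, dim_missing=64, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_avail, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent)
        self.logvar = nn.Linear(128, latent)
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                     nn.Linear(128, dim_missing))

    def forward(self, x_avail, x_missing):
        h = self.encoder(x_avail)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        recon = self.decoder(z)
        recon_loss = ((recon - x_missing) ** 2).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        return recon, recon_loss + 1e-3 * kl

model = CondVAE()
_, loss = model(torch.randn(32, 64), torch.randn(32, 64))
loss.backward()
```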
arXiv Detail & Related papers (2021-11-30T04:56:16Z)
- The Medkit-Learn(ing) Environment: Medical Decision Modelling through Simulation [81.72197368690031]
We present a new benchmarking suite designed specifically for medical sequential decision making.
The Medkit-Learn(ing) Environment is a publicly available Python package providing simple and easy access to high-fidelity synthetic medical data.
arXiv Detail & Related papers (2021-06-08T10:38:09Z)
- Adversarial Sample Enhanced Domain Adaptation: A Case Study on Predictive Modeling with Electronic Health Records [57.75125067744978]
We propose a data augmentation method to facilitate domain adaptation.
Adversarially generated samples are used during domain adaptation.
Results confirm the effectiveness and generality of our method across different tasks.
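A minimal FGSM-style sketch of the augmentation step is given below: source-domain EHR feature vectors are perturbed in the direction that increases the task loss and added to the adaptation training pool. The perturbation scheme, epsilon, and toy model are generic assumptions, not the paper's method.

```python
# FGSM-style adversarial augmentation of source-domain samples (illustrative).
import torch
import torch.nn as nn

def adversarial_augment(model, x, y, epsilon=0.05):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

model = nn.Sequential(nn.Linear(30, 64), nn.ReLU(), nn.Linear(64, 2))
x_src, y_src = torch.randn(128, 30), torch.randint(0, 2, (128,))
x_aug = adversarial_augment(model, x_src, y_src)
train_pool_x = torch.cat([x_src, x_aug])      # used alongside target-domain data
train_pool_y = torch.cat([y_src, y_src])
```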
arXiv Detail & Related papers (2021-01-13T03:20:20Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
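A stripped-down sketch of the consistency idea appears below: unlabeled inputs contribute a loss penalising disagreement between predictions under two random perturbations, added to the supervised loss. The Gaussian perturbation and weighting are placeholders and omit the paper's relation-driven (sample-relation) component.

```python
# Prediction-consistency semi-supervised loss under perturbation (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU(), nn.Linear(128, 5))

def semi_supervised_loss(x_labeled, y_labeled, x_unlabeled, consistency_weight=1.0):
    supervised = F.cross_entropy(model(x_labeled), y_labeled)
    p1 = F.softmax(model(x_unlabeled + 0.1 * torch.randn_like(x_unlabeled)), dim=-1)
    p2 = F.softmax(model(x_unlabeled + 0.1 * torch.randn_like(x_unlabeled)), dim=-1)
    consistency = F.mse_loss(p1, p2)
    return supervised + consistency_weight * consistency

loss = semi_supervised_loss(torch.randn(8, 1, 32, 32), torch.randint(0, 5, (8,)),
                            torch.randn(16, 1, 32, 32))
loss.backward()
```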
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.