The Future AI in Healthcare: A Tsunami of False Alarms or a Product of
Experts?
- URL: http://arxiv.org/abs/2007.10502v2
- Date: Mon, 27 Jul 2020 03:26:46 GMT
- Title: The Future AI in Healthcare: A Tsunami of False Alarms or a Product of
Experts?
- Authors: Gari D. Clifford
- Abstract summary: I argue that most, if not all, of these publications or commercial algorithms make several fundamental errors.
We should vote many algorithms together, weighted by their overall performance, their independence from each other, and a set of features that define the context.
- Score: 3.8244083622687306
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent significant increases in affordable and accessible computational power
and data storage have enabled machine learning to provide almost unbelievable
classification and prediction performances compared to well-trained humans.
There have been some promising (but limited) results in the complex healthcare
landscape, particularly in imaging. This promise has led some individuals to
leap to the conclusion that we will solve an ever-increasing number of problems
in human health and medicine by applying `artificial intelligence' to `big
(medical) data'. The scientific literature has been inundated with algorithms,
outstripping our ability to review them effectively. Unfortunately, I argue
that most, if not all, of these publications or commercial algorithms make
several fundamental errors. I argue that because everyone (and therefore every
algorithm) has blind spots, there are multiple `best' algorithms, each of which
excels on different types of patients or in different contexts. Consequently,
we should vote many algorithms together, weighted by their overall performance,
their independence from each other, and a set of features that define the
context (i.e., the features that maximally discriminate between the situations
when one algorithm outperforms another). This approach not only provides a
better-performing classifier or predictor but also provides confidence intervals so
that a clinician can judge how to respond to an alert. Moreover, I argue that a
sufficient number of (mostly) independent algorithms that address the same
problem can be generated through a large international competition/challenge
lasting many months, and I define the conditions for a successful event. Finally,
I propose introducing the requirement for major grantees to run challenges in
the final year of funding to maximize the value of research and select a new
generation of grantees.
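
Below is a minimal, illustrative sketch (not the author's implementation) of the kind of context-weighted voting the abstract proposes: each algorithm's vote is weighted by its overall validation performance, by how independent its errors are from the rest of the ensemble, and by its performance in contexts similar to the current case, with the spread of the weighted members used as a rough confidence interval. The toy data and every name in the snippet (e.g. `context_weight`, `vote`) are assumptions made purely for illustration.

```python
# Minimal sketch: vote several algorithms together, weighted by
# (1) overall performance, (2) independence of their errors, and
# (3) context-specific performance, with a spread-based interval.
import numpy as np

rng = np.random.default_rng(0)

# Toy validation set: three base algorithms, each represented by its
# per-case probability estimates, plus the true binary labels.
n_cases = 200
y_true = rng.integers(0, 2, size=n_cases)
context = rng.normal(size=n_cases)  # one "context" feature per case

signal = 2 * y_true - 1  # +1 / -1
preds = np.stack([
    np.clip(0.5 + 0.4 * signal * (context > 0), 0, 1),    # strong when context > 0
    np.clip(0.5 + 0.4 * signal * (context <= 0), 0, 1),   # strong when context <= 0
    np.clip(0.5 + 0.1 * signal + 0.3 * rng.normal(size=n_cases), 0, 1),  # weak, noisy
])
hard = (preds > 0.5).astype(int)

# (1) Overall-performance weight: validation accuracy of each algorithm.
acc = (hard == y_true).mean(axis=1)

# (2) Independence weight: down-weight algorithms whose errors are highly
# correlated with the errors of the rest of the ensemble.
errors = (hard != y_true).astype(float)
mean_abs_corr = np.array([
    np.mean([abs(np.corrcoef(errors[i], errors[j])[0, 1])
             for j in range(len(errors)) if j != i])
    for i in range(len(errors))
])
independence = 1.0 - mean_abs_corr

# (3) Context weight: accuracy of each algorithm on the validation cases
# whose context is closest to the new case (a crude stand-in for "features
# that maximally discriminate between situations").
def context_weight(c_new, k=25):
    nearest = np.argsort(np.abs(context - c_new))[:k]
    return (hard[:, nearest] == y_true[nearest]).mean(axis=1)

def vote(member_probs, c_new):
    """Weighted vote over member probabilities plus a rough interval."""
    w = acc * independence * context_weight(c_new)
    w = w / w.sum()
    p = float(w @ member_probs)
    spread = float(np.sqrt(w @ (member_probs - p) ** 2))  # weighted member spread
    return p, (max(0.0, p - 2 * spread), min(1.0, p + 2 * spread))

# Example: a new case where the first algorithm should dominate (context > 0).
p, (low, high) = vote(np.array([0.9, 0.2, 0.6]), c_new=1.3)
print(f"ensemble probability {p:.2f}, approx. interval [{low:.2f}, {high:.2f}]")
```

The product of the three weights is only one simple combination rule; a learned gating model over the context features could replace the nearest-neighbour context weight without changing the overall idea.
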
Related papers
- AI-Assisted Decision Making with Human Learning [8.598431584462944]
In many cases, despite the algorithm's superior performance, the final decision remains in human hands.
This paper studies such AI-assisted decision-making settings, where the human learns through repeated interactions with the algorithm.
We observe that the discrepancy between the algorithm's model and the human's model creates a fundamental tradeoff.
arXiv Detail & Related papers (2025-02-18T17:08:21Z) - Optimal Multi-Objective Best Arm Identification with Fixed Confidence [62.36929749450298]
We consider a multi-armed bandit setting in which each arm yields an $M$-dimensional vector reward upon selection.
The end goal is to identify the best arm of every objective in the shortest (expected) time, subject to an upper bound on the probability of error.
We propose an algorithm that uses the novel idea of surrogate proportions to sample the arms at each time step, eliminating the need to solve the max-min optimisation problem at each step.
arXiv Detail & Related papers (2025-01-23T12:28:09Z) - When can you trust feature selection? -- I: A condition-based analysis
of LASSO and generalised hardness of approximation [49.1574468325115]
We show how no (randomised) algorithm can determine the correct support sets (with probability $> 1/2$) of minimisers of LASSO when reading approximate input.
For ill-posed inputs, the algorithm runs forever; hence, it will never produce a wrong answer.
For any algorithm defined on an open set containing a point with infinite condition number, there is an input for which the algorithm will either run forever or produce a wrong answer.
arXiv Detail & Related papers (2023-12-18T18:29:01Z) - Who Should Predict? Exact Algorithms For Learning to Defer to Humans [40.22768241509553]
We show that prior approaches can fail to find a human-AI system with low misclassification error.
We give a mixed-integer-linear-programming (MILP) formulation that can optimally solve the problem in the linear setting.
We provide a novel surrogate loss function that is realizable-consistent and performs well empirically.
arXiv Detail & Related papers (2023-01-15T21:57:36Z) - Stochastic Differentially Private and Fair Learning [7.971065005161566]
We provide the first differentially private algorithm for fair learning that is guaranteed to converge.
Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds.
Our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes.
arXiv Detail & Related papers (2022-10-17T06:54:57Z) - Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z) - DPER: Efficient Parameter Estimation for Randomly Missing Data [0.24466725954625884]
We propose novel algorithms to find the maximum likelihood estimates (MLEs) for a one-class/multiple-class randomly missing data set.
Our algorithms do not require multiple iterations through the data, thus promising to be less time-consuming than other methods.
arXiv Detail & Related papers (2021-06-06T16:37:48Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves, for any $k$, an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - Resource Allocation in Multi-armed Bandit Exploration: Overcoming
Sublinear Scaling with Adaptive Parallelism [107.48538091418412]
We study exploration in multi-armed bandits when we have access to a divisible resource that can be allocated in varying amounts to arm pulls.
We focus in particular on the allocation of distributed computing resources, where we may obtain results faster by allocating more resources per pull.
arXiv Detail & Related papers (2020-10-31T18:19:29Z) - New Oracle-Efficient Algorithms for Private Synthetic Data Release [52.33506193761153]
We present three new algorithms for constructing differentially private synthetic data.
The algorithms satisfy differential privacy even in the worst case.
Compared to the state-of-the-art High-Dimensional Matrix Mechanism [McKennaMHM18], our algorithms provide better accuracy in the large-workload regime.
arXiv Detail & Related papers (2020-07-10T15:46:05Z)