Risk score learning for COVID-19 contact tracing apps
- URL: http://arxiv.org/abs/2104.08415v1
- Date: Sat, 17 Apr 2021 00:55:36 GMT
- Title: Risk score learning for COVID-19 contact tracing apps
- Authors: Kevin Murphy and Abhishek Kumar and Stelios Serghiou
- Abstract summary: Digital contact tracing apps for COVID-19 need to estimate the risk that a user was infected during a particular exposure.
We show that machine learning methods can be used to automatically optimize the parameters of the risk score model.
- Score: 18.64835871177855
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digital contact tracing apps for COVID-19, such as the one developed by
Google and Apple, need to estimate the risk that a user was infected during a
particular exposure, in order to decide whether to notify the user to take
precautions, such as entering into quarantine, or requesting a test. Such risk
score models contain numerous parameters that must be set by the public health
authority. Although expert guidance for how to set these parameters has been
provided (e.g.
https://github.com/lfph/gaen-risk-scoring/blob/main/risk-scoring.md), it is
natural to ask if we could do better using a data-driven approach. This can be
particularly useful when the risk factors of the disease change, e.g., due to
the evolution of new variants, or the adoption of vaccines.
In this paper, we show that machine learning methods can be used to
automatically optimize the parameters of the risk score model, provided we have
access to exposure and outcome data. Although this data is already being
collected in an aggregated, privacy-preserving way by several health
authorities, in this paper we limit ourselves to simulated data, so that we can
systematically study the different factors that affect the feasibility of the
approach. In particular, we show that the parameters become harder to estimate
when there is more missing data (e.g., due to infections which were not
recorded by the app). Nevertheless, the learning approach outperforms a strong
manually designed baseline.
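To make the learning setup concrete, below is a minimal sketch of fitting risk-score parameters to exposure/outcome data by maximum likelihood. It assumes the general structure described in the abstract and the linked GAEN risk-scoring guidance (duration-weighted Bluetooth-attenuation buckets times an infectiousness weight, with an exponential dose-response link), but the bucket layout, weight names, ground-truth values, and simulated data are hypothetical, and SciPy's L-BFGS with numerical gradients stands in for the paper's gradient-based optimization; it is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative simulated exposures (not the paper's dataset): minutes spent in
# each of three Bluetooth-attenuation buckets, plus the reported
# infectiousness level of the index case (0 = standard, 1 = high).
rng = np.random.default_rng(0)
n = 5000
durations = rng.integers(0, 31, size=(n, 3)).astype(float)  # minutes per bucket
level = rng.integers(0, 2, size=n)

# Ground-truth risk-score parameters, used only to generate outcomes.
true_atten_w = np.array([0.02, 0.01, 0.002])
true_level_w = np.array([1.0, 2.0])
true_score = durations @ true_atten_w * true_level_w[level]
# Exponential dose-response link: P(infected) = 1 - exp(-score).
infected = (rng.random(n) < -np.expm1(-true_score)).astype(float)

def neg_log_lik(params):
    """Negative log-likelihood of the outcomes given candidate weights.
    params[:3] are log attenuation-bucket weights; params[3] is the log of
    the high-infectiousness multiplier (the standard level is pinned to 1
    so the two weight vectors are jointly identifiable)."""
    atten_w = np.exp(params[:3])
    level_w = np.array([1.0, np.exp(params[3])])
    score = durations @ atten_w * level_w[level]
    log_p = np.log(-np.expm1(-score) + 1e-12)  # log P(infected)
    log_q = -score                             # log P(not infected)
    return -np.mean(infected * log_p + (1.0 - infected) * log_q)

# Fit by maximum likelihood; L-BFGS with numerical gradients is used here
# purely for brevity.
fit = minimize(neg_log_lik, x0=np.full(4, -3.0), method="L-BFGS-B")
print("learned attenuation weights:", np.exp(fit.x[:3]))
print("learned high-infectiousness multiplier:", np.exp(fit.x[3]))
```

Pinning the standard infectiousness weight to 1 resolves the scale ambiguity between the two weight vectors; without such a reference level, only the product of the weights would be identifiable from outcome data.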
Related papers
- Certified Robustness to Data Poisoning in Gradient-Based Training [10.79739918021407]
We develop the first framework providing provable guarantees on the behavior of models trained with potentially manipulated data.
Our framework certifies robustness against untargeted and targeted poisoning, as well as backdoor attacks.
We demonstrate our approach on multiple real-world datasets from applications including energy consumption, medical imaging, and autonomous driving.
arXiv Detail & Related papers (2024-06-09T06:59:46Z)
- Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions.
arXiv Detail & Related papers (2024-03-28T17:28:06Z)
- Adaptive White-Box Watermarking with Self-Mutual Check Parameters in Deep Neural Networks [14.039159907367985]
Fragile watermarking is a technique used to identify tampering in AI models.
Previous methods have faced challenges including risks of omission, additional information transmission, and inability to locate tampering precisely.
We propose a method that can detect, locate, and restore parameters and bits that have been tampered with.
arXiv Detail & Related papers (2023-08-22T07:21:06Z)
- Diagnosis Uncertain Models For Medical Risk Prediction [80.07192791931533]
We consider a patient risk model which has access to vital signs, lab values, and prior history but does not have access to a patient's diagnosis.
We show that such 'all-cause' risk models generalize well across diagnoses but have a predictable failure mode.
We propose a fix for this problem by explicitly modeling the uncertainty in risk prediction coming from uncertainty in patient diagnoses.
arXiv Detail & Related papers (2023-06-29T23:36:04Z)
- Autoregressive Perturbations for Data Poisoning [54.205200221427994]
Data scraping from social media has led to growing concerns regarding unauthorized use of data.
Data poisoning attacks have been proposed as a bulwark against scraping.
We introduce autoregressive (AR) poisoning, a method that can generate poisoned data without access to the broader dataset.
arXiv Detail & Related papers (2022-06-08T06:24:51Z)
- MOAI: A methodology for evaluating the impact of indoor airflow in the transmission of COVID-19 [37.38767180122748]
Epidemiology models play a key role in understanding and responding to the COVID-19 pandemic.
We present a model to evaluate the risk to a user in a given setting.
We then propose a temporal addition to the model to evaluate the risk exposure over time for a given user.
arXiv Detail & Related papers (2021-03-31T14:06:09Z)
- Epidemic mitigation by statistical inference from contact tracing data [61.04165571425021]
We develop Bayesian inference methods to estimate the risk that an individual is infected.
We propose to use probabilistic risk estimation in order to optimize testing and quarantining strategies for the control of an epidemic.
Our approaches translate into fully distributed algorithms that only require communication between individuals who have recently been in contact.
arXiv Detail & Related papers (2020-09-20T12:24:45Z)
- COVI White Paper [67.04578448931741]
Contact tracing is an essential tool to change the course of the COVID-19 pandemic.
We present an overview of the rationale, design, ethical considerations and privacy strategy of 'COVI,' a COVID-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
arXiv Detail & Related papers (2020-05-18T07:40:49Z)
- Orthogonal Statistical Learning [49.55515683387805]
We provide non-asymptotic excess risk guarantees for statistical learning in a setting where the population risk depends on an unknown nuisance parameter.
We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the nuisance estimation error on the excess risk bound achieved by the meta-algorithm is of second order.
arXiv Detail & Related papers (2019-01-25T02:21:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.