Representation Learning with Information Theory for COVID-19 Detection
- URL: http://arxiv.org/abs/2207.01437v1
- Date: Mon, 4 Jul 2022 14:25:12 GMT
- Title: Representation Learning with Information Theory for COVID-19 Detection
- Authors: Abel Díaz Berenguer, Tanmoy Mukherjee, Matias Bossa, Nikos Deligiannis, Hichem Sahli
- Abstract summary: We show how to aid deep models in discovering useful priors from data to learn their intrinsic properties.
Our model, which we call a dual role network (DRN), uses a dependency maximization approach based on Least Squared Mutual Information (LSMI).
Experiments on CT-based COVID-19 Detection and COVID-19 Severity Detection benchmarks demonstrate the effectiveness of our method.
- Score: 18.98329701403629
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Successful data representation is a fundamental factor in machine learning
based medical imaging analysis. Deep Learning (DL) has taken an essential role
in robust representation learning. However, deep models can quickly overfit
intricate patterns and thereby fail to generalize to unseen data. We can
therefore implement strategies that aid deep models in discovering useful
priors from data to learn their intrinsic properties. Our model, which we call
a dual role network (DRN), uses a dependency maximization approach based on
Least Squared Mutual Information (LSMI). The LSMI leverages dependency measures
to ensure representation invariance and local smoothness. While prior works
have used information-theoretic measures such as mutual information, known to be
computationally expensive due to a density estimation step, our LSMI
formulation alleviates the issues of intractable mutual information estimation
and can be used to approximate it. Experiments on CT-based COVID-19 Detection
and COVID-19 Severity Detection benchmarks demonstrate the effectiveness of our
method.
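As a rough illustration of the estimator behind this approach, below is a minimal NumPy sketch of least-squares mutual information via direct density-ratio fitting. The Gaussian kernel basis, bandwidth `sigma`, and regularizer `lam` are illustrative assumptions, not the DRN's actual configuration.

```python
import numpy as np

def lsmi(x, y, n_centers=100, sigma=1.0, lam=1e-3, seed=0):
    """Estimate squared-loss mutual information between paired samples x, y."""
    n = x.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=min(n_centers, n), replace=False)

    def gram(a, centers):
        # Gaussian kernel matrix between rows of `a` and the kernel centers.
        d2 = ((a[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    Kx = gram(x, x[idx])          # (n, b)
    Ky = gram(y, y[idx])          # (n, b)
    Phi = Kx * Ky                 # basis functions evaluated on paired samples

    h = Phi.mean(axis=0)                          # h_l = E_joint[phi_l]
    H = (Kx.T @ Kx) * (Ky.T @ Ky) / n ** 2        # H_ll' = E_marginals[phi_l phi_l']

    theta = np.linalg.solve(H + lam * np.eye(len(h)), h)
    # SMI = 0.5 * E_joint[r(x, y)] - 0.5, with the density ratio r ~ theta . phi
    return 0.5 * theta @ h - 0.5
```

Because the density ratio is fit directly by ridge regression, no explicit density estimation is needed, which is the tractability advantage the abstract highlights.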
Related papers
- Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy is to base it on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), defined as the smallest probability of disagreement of the predicted label.
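For orientation, the generic uncertainty-based strategy mentioned above can be sketched in a few lines; this is not the LDM itself, and `probs` is assumed to hold a model's predicted class probabilities over the unlabeled pool.

```python
import numpy as np

def query_by_entropy(probs, k):
    """Pick the k pool samples with the highest predictive entropy."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:k]   # indices to send for labeling
```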
arXiv Detail & Related papers (2024-01-18T08:12:23Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
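A rough sketch of one such measure, in the spirit of the title: take the Shannon entropy of the (powered) eigenvalue spectrum of a diffusion operator built on the data. The bandwidth `sigma` and diffusion time `t` here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def diffusion_spectral_entropy(x, sigma=1.0, t=1):
    """Entropy of the powered eigenvalue spectrum of a diffusion operator."""
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)        # row-stochastic diffusion matrix
    lam = np.abs(np.linalg.eigvals(P)) ** t     # diffusion for t steps
    p = lam / lam.sum()                         # spectrum as a distribution
    return float(-(p * np.log(p + 1e-12)).sum())
```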
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that uses reinforcement learning (RL) to automatically search for the optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
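For context, the quantity the RL agent searches over can be illustrated with a plain random-masking baseline; the reward design and policy that replace it are the paper's contribution and are not sketched here.

```python
import numpy as np

def random_patch_mask(n_patches, mask_ratio, seed=0):
    """Boolean mask over image patches; True marks a patch to be masked."""
    rng = np.random.default_rng(seed)
    n_mask = int(round(n_patches * mask_ratio))
    mask = np.zeros(n_patches, dtype=bool)
    mask[rng.choice(n_patches, size=n_mask, replace=False)] = True
    return mask
```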
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- Max-Sliced Mutual Information [17.667315953598788]
Quantifying the dependence between high-dimensional random variables is central to statistical learning and inference.
Two classical methods are canonical correlation analysis (CCA), which identifies maximally correlated projected versions of the original variables, and Shannon's mutual information, which is a universal dependence measure.
This work proposes a middle ground in the form of a scalable information-theoretic generalization of CCA, termed max-sliced mutual information (mSMI).
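As a toy illustration of the slicing idea (not the mSMI estimator itself), one can search random one-dimensional projections of X and Y and score each pair with the Gaussian closed form I = -0.5*log(1 - rho^2), a simplifying assumption:

```python
import numpy as np

def max_sliced_mi_gaussian(x, y, n_slices=500, seed=0):
    """Best dependence found over random 1-D projections of x and y."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_slices):
        a = rng.standard_normal(x.shape[1]); a /= np.linalg.norm(a)
        b = rng.standard_normal(y.shape[1]); b /= np.linalg.norm(b)
        rho2 = np.corrcoef(x @ a, y @ b)[0, 1] ** 2
        best = max(best, -0.5 * np.log(1.0 - min(rho2, 1.0 - 1e-9)))
    return best
```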
arXiv Detail & Related papers (2023-09-28T06:49:25Z)
- DCID: Deep Canonical Information Decomposition [84.59396326810085]
We consider the problem of identifying the signal shared between two one-dimensional target variables.
We propose ICM, an evaluation metric which can be used in the presence of ground-truth labels.
We also propose Deep Canonical Information Decomposition (DCID) - a simple, yet effective approach for learning the shared variables.
arXiv Detail & Related papers (2023-06-27T16:59:06Z)
- Longitudinal detection of new MS lesions using Deep Learning [0.0]
We describe a deep-learning-based pipeline addressing the task of detecting and segmenting new MS lesions.
First, we propose to use transfer learning from a model trained on a segmentation task using single time-points.
Second, we propose a data synthesis strategy to generate realistic longitudinal time-points with new lesions.
arXiv Detail & Related papers (2022-06-16T16:09:04Z)
- Local Intrinsic Dimensionality Signals Adversarial Perturbations [28.328973408891834]
Local intrinsic dimensionality (LID) is a local metric that describes the minimum number of latent variables required to describe each data point.
In this paper, we derive a lower bound and an upper bound for the LID value of a perturbed data point and demonstrate that the bounds, in particular the lower bound, have a positive correlation with the magnitude of the perturbation.
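For reference, the kind of local estimator these bounds reason about can be sketched with a maximum-likelihood LID estimate over the k nearest neighbors (a Levina-Bickel-style formula; k is an illustrative choice):

```python
import numpy as np

def lid_mle(query, data, k=20):
    """Maximum-likelihood LID estimate of `query` against reference `data`."""
    dists = np.sort(np.linalg.norm(data - query, axis=1))
    r = dists[dists > 0][:k]                 # k smallest non-zero distances
    return -1.0 / np.mean(np.log(r[:-1] / r[-1]))
```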
arXiv Detail & Related papers (2021-09-24T08:29:50Z)
- Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
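One common way to realize such a neural estimator is a MINE-style Donsker-Varadhan lower bound, sketched below in PyTorch; the network size and the in-batch shuffle used to form negative pairs are illustrative assumptions, not the CSAD architecture.

```python
import math
import torch
import torch.nn as nn

class MIEstimator(nn.Module):
    """MINE-style lower bound on I(X; Y); train by maximizing the output."""
    def __init__(self, dx, dy, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dx + dy, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, y):
        joint = self.net(torch.cat([x, y], dim=1))      # paired samples
        y_shuffled = y[torch.randperm(y.size(0))]       # break the pairing
        marginal = self.net(torch.cat([x, y_shuffled], dim=1))
        # Donsker-Varadhan bound: E_joint[T] - log E_marginal[exp(T)]
        return joint.mean() - (torch.logsumexp(marginal, dim=0)
                               - math.log(marginal.size(0))).squeeze()
```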
arXiv Detail & Related papers (2021-08-11T21:17:02Z)
- Bounding Information Leakage in Machine Learning [26.64770573405079]
This paper investigates fundamental bounds on information leakage.
We identify and bound the success rate of the worst-case membership inference attack.
We derive bounds on the mutual information between the sensitive attributes and model parameters.
arXiv Detail & Related papers (2021-05-09T08:49:14Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework preserves the relations between samples well.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)