Using a Cross-Task Grid of Linear Probes to Interpret CNN Model
Predictions On Retinal Images
- URL: http://arxiv.org/abs/2107.11468v1
- Date: Fri, 23 Jul 2021 21:30:27 GMT
- Title: Using a Cross-Task Grid of Linear Probes to Interpret CNN Model
Predictions On Retinal Images
- Authors: Katy Blumer, Subhashini Venugopalan, Michael P. Brenner, Jon Kleinberg
- Abstract summary: We analyze a dataset of retinal images using linear probes: linear regression models trained on some "target" task, using embeddings from a deep convolutional neural network (CNN) model trained on some "source" task as input.
We use this method across all possible pairings of 93 tasks in the UK Biobank dataset of retinal images, leading to ~164k different models.
- Score: 3.5789352263336847
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We analyze a dataset of retinal images using linear probes: linear regression
models trained on some "target" task, using embeddings from a deep
convolutional neural network (CNN) model trained on some "source" task as
input. We use this
method across all possible pairings of 93 tasks in the UK Biobank dataset of
retinal images, leading to ~164k different models. We analyze the performance
of these linear probes by source and target task and by layer depth. We observe
that representations from the middle layers of the network are more
generalizable. We find that some target tasks are easily predicted irrespective
of the source task, and that some other target tasks are more accurately
predicted from correlated source tasks than from embeddings trained on the same
task.
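To make the setup concrete, here is a minimal sketch of the cross-task probe grid in Python with scikit-learn (our illustration, not the authors' released code; the dictionary interface, train/test split, and ridge penalty are assumptions):

```python
# For every (source, target) pairing, fit a linear model on frozen CNN
# embeddings and score it on held-out images.
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def probe_grid(embeddings_by_source, labels_by_target):
    """embeddings_by_source: {source_task: (n_images, d) frozen embeddings}
    labels_by_target: {target_task: (n_images,) labels} (names assumed)."""
    scores = {}
    for src, X in embeddings_by_source.items():
        for tgt, y in labels_by_target.items():
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
            probe = Ridge(alpha=1.0).fit(X_tr, y_tr)      # linear probe
            scores[(src, tgt)] = probe.score(X_te, y_te)  # held-out R^2
    return scores
```

Running this grid once per embedding layer gives the by-depth comparisons the abstract describes.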
Related papers
- MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining [73.81862342673894]
Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks.
However, transferring these pretrained models to downstream tasks can encounter task discrepancy, because pretraining is formulated as an image classification or object discrimination task.
We conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection.
Our models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection.
arXiv Detail & Related papers (2024-03-20T09:17:22Z)
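The multi-task pretraining described in the MTP entry above follows a shared-backbone, multiple-heads pattern; here is a rough sketch (our PyTorch illustration with hypothetical module names, not the MTP release):

```python
import torch.nn as nn

class MultiTaskPretrainer(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int,
                 n_seg_classes: int, n_det_outputs: int):
        super().__init__()
        self.backbone = backbone  # shared encoder producing (B, feat_dim, H, W)
        self.seg_head = nn.Conv2d(feat_dim, n_seg_classes, 1)  # segmentation
        self.det_head = nn.Linear(feat_dim, n_det_outputs)     # detection (simplified)

    def forward(self, x):
        feats = self.backbone(x)
        seg_logits = self.seg_head(feats)                   # per-pixel classes
        det_logits = self.det_head(feats.mean(dim=(2, 3)))  # pooled features
        return seg_logits, det_logits

# Pretraining sums the per-task losses so the shared backbone must serve all
# tasks; the backbone is then finetuned on the downstream task of interest.
```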
- Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks.
arXiv Detail & Related papers (2023-07-13T16:39:08Z)
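A schematic rendering of the pseudo-contrastive insight above (our illustration only; the paper's formal objective differs):

```python
import torch
import torch.nn.functional as F

def pseudo_contrastive_loss(z, same_label_mask):
    """Pull together embeddings of points that typically share labels across
    tasks, push apart the rest (a sketch, not the paper's objective).
    z: (n, d) embeddings; same_label_mask: (n, n) boolean matrix."""
    z = F.normalize(z, dim=1)                        # unit-norm embeddings
    sim = z @ z.T                                    # cosine similarity matrix
    off_diag = ~torch.eye(z.shape[0], dtype=torch.bool, device=z.device)
    pos = sim[same_label_mask & off_diag].mean()     # aligned pairs
    neg = sim[~same_label_mask & off_diag].mean()    # non-aligned pairs
    return neg - pos                                 # minimizing aligns positives
```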
- Diffused Redundancy in Pre-trained Representations [98.55546694886819]
We take a closer look at how features are encoded in pre-trained representations.
We find that learned representations in a given layer exhibit a degree of diffuse redundancy.
Our findings shed light on the nature of representations learned by pre-trained deep neural networks.
arXiv Detail & Related papers (2023-05-31T21:00:50Z)
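One simple way to measure such diffuse redundancy (a sketch under our own assumptions about data layout; the paper's exact protocol may differ) is to train a linear probe on a random subset of a layer's neurons and compare it with a probe on the full layer:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def subset_probe_score(features, labels, frac, seed=0):
    """features: (n, d) activations of one layer; labels: (n,) classes."""
    rng = np.random.default_rng(seed)
    n, d = features.shape
    idx = rng.choice(d, size=max(1, int(frac * d)), replace=False)
    split = n // 2
    clf = LogisticRegression(max_iter=1000).fit(features[:split, idx], labels[:split])
    return clf.score(features[split:, idx], labels[split:])

# If the score at frac=0.25 approaches the score at frac=1.0, the layer's
# information is spread redundantly across its neurons.
```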
- TRAK: Attributing Model Behavior at Scale [79.56020040993947]
We present TRAK (Tracing with the Randomly-projected After Kernel), a data attribution method that is both effective and computationally tractable for large-scale, differentiable models.
arXiv Detail & Related papers (2023-03-24T17:56:22Z)
- Task Discovery: Finding the Tasks that Neural Networks Generalize on [1.4043229953691112]
We show that one set of images can give rise to many tasks on which neural networks generalize well.
As an example, we show that the discovered tasks can be used to automatically create adversarial train-test splits.
arXiv Detail & Related papers (2022-12-01T03:57:48Z)
- TransformNet: Self-supervised representation learning through predicting geometric transformations [0.8098097078441623]
We describe an unsupervised semantic feature learning approach for recognizing the geometric transformation applied to the input data.
The basic concept is that someone who cannot recognize the objects in an image would also be unable to predict the geometric transformation that was applied to it.
arXiv Detail & Related papers (2022-02-08T22:41:01Z)
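A minimal sketch of this transformation-prediction pretext task, using rotations (our illustration; TransformNet's actual architecture and transformation set may differ):

```python
import torch
import torch.nn as nn

class RotationPretext(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Linear(feat_dim, 4)  # 0/90/180/270 degrees

    def forward(self, x):
        return self.classifier(self.encoder(x))

def make_rotation_batch(images):
    """Rotate each (C, H, W) image by a random multiple of 90 degrees;
    the rotation index is the self-supervised label."""
    labels = torch.randint(0, 4, (images.shape[0],))
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels
```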
- Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study of resource task sampling by leveraging techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
arXiv Detail & Related papers (2022-02-02T08:23:24Z)
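The estimate-then-sample loop described above can be sketched as follows (relevance_fn and the proportional-sampling rule are our assumptions, not the paper's exact algorithm):

```python
import numpy as np

def active_task_sampling(relevance_fn, source_tasks, n_rounds, budget, seed=0):
    rng = np.random.default_rng(seed)
    counts = {t: 0 for t in source_tasks}
    for _ in range(n_rounds):
        rel = np.array([relevance_fn(t) for t in source_tasks])  # re-estimate
        probs = rel / rel.sum()                 # sample sources by relevance
        for i in rng.choice(len(source_tasks), size=budget, p=probs):
            counts[source_tasks[i]] += 1        # request more data from task i
    return counts
```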
- Learning Co-segmentation by Segment Swapping for Retrieval and Discovery [67.6609943904996]
The goal of this work is to efficiently identify visually similar patterns from a pair of images.
We generate synthetic training pairs by selecting object segments in an image and copy-pasting them into another image.
We show our approach provides clear improvements for artwork details retrieval on the Brueghel dataset.
arXiv Detail & Related papers (2021-10-29T16:51:16Z)
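The synthetic-pair generation can be sketched as follows (our illustration; the (H, W, C) layout, binary mask, and equal image sizes are assumptions, and any blending the authors apply is omitted):

```python
import numpy as np

def make_segment_swap_pair(src_img, src_mask, dst_img):
    """Paste the masked segment of src_img onto dst_img; the segment then
    appears in both images, giving a synthetic co-segmentation pair."""
    mask = src_mask.astype(bool)[..., None]        # (H, W, 1), broadcastable
    composite = np.where(mask, src_img, dst_img)   # copy-paste the segment
    return src_img, composite
```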
- Dataset for eye-tracking tasks [0.0]
We present a dataset that is suitable for training custom convolutional neural network models for eye-tracking tasks.
This dataset contains 10,000 eye images at a resolution of 416 by 416 pixels.
This manuscript can be considered as a guide for the preparation of datasets for eye-tracking devices.
arXiv Detail & Related papers (2021-06-01T23:54:23Z)
- Representation Learning Beyond Linear Prediction Functions [33.94130046391917]
We show that diversity can be achieved when source tasks and the target task use different prediction function spaces beyond linear functions.
For a general function class, we find that eluder dimension gives a lower bound on the number of tasks required for diversity.
arXiv Detail & Related papers (2021-05-31T14:21:52Z)
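Schematically, in notation of our own (not the paper's exact statement): writing $\dim_E(\mathcal{F})$ for the eluder dimension of the prediction function class $\mathcal{F}$, the result says the number $T$ of source tasks required for diversity satisfies

$$ T \;\gtrsim\; \dim_E(\mathcal{F}). $$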