Combining Image Features and Patient Metadata to Enhance Transfer Learning
- URL: http://arxiv.org/abs/2110.05239v1
- Date: Fri, 8 Oct 2021 15:43:31 GMT
- Title: Combining Image Features and Patient Metadata to Enhance Transfer Learning
- Authors: Spencer A. Thomas
- Abstract summary: We compare the performance of six state-of-the-art deep neural networks in classification tasks when using only image features, to when these are combined with patient metadata.
Our results indicate that this performance enhancement may be a general property of deep networks and should be explored in other areas.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we compare the performance of six state-of-the-art deep neural
networks in classification tasks when using only image features, to when these
are combined with patient metadata. We utilise transfer learning from networks
pretrained on ImageNet to extract image features from the ISIC HAM10000 dataset
prior to classification. Using several classification performance metrics, we
evaluate the effects of including metadata with the image features.
Furthermore, we repeat our experiments with data augmentation. Our results show
an overall enhancement in performance of each network as assessed by all
metrics, with degradation noted only in the VGG16 architecture. Our results indicate
that this performance enhancement may be a general property of deep networks
and should be explored in other areas. Moreover, these improvements come at a
negligible additional cost in computation time, and therefore are a practical
method for other applications.
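The fusion step described in the abstract — combining pretrained-network image features with patient metadata before classification — can be sketched as below. This is a minimal illustration, not the paper's code: the feature dimensionality, metadata fields, and random stand-in values are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for features from a pretrained backbone (e.g. 2048-d pooled
# activations extracted via transfer learning) and simple patient
# metadata; a real pipeline would compute these from the HAM10000
# images and clinical records.
n_samples, n_img_feat = 200, 2048
image_features = rng.normal(size=(n_samples, n_img_feat))

age = rng.uniform(0.0, 90.0, size=(n_samples, 1))       # continuous
sex = rng.integers(0, 2, size=(n_samples, 1))           # binary-encoded
site = np.eye(4)[rng.integers(0, 4, size=n_samples)]    # one-hot lesion site

# Normalise the continuous field so its scale is comparable with the
# (roughly unit-scale) image features.
age = (age - age.mean()) / age.std()

metadata = np.hstack([age, sex, site])

# The fusion itself: a single concatenation, after which any standard
# classifier head can be trained on the combined vector.
fused = np.hstack([image_features, metadata])

print(fused.shape)  # (200, 2054)
```

Because the metadata adds only a handful of columns to a 2048-dimensional feature vector, the extra classifier cost is negligible, which is consistent with the abstract's claim about computation time.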
Related papers
- Improving Neural Surface Reconstruction with Feature Priors from Multi-View Image [87.00660347447494]
Recent advancements in Neural Surface Reconstruction (NSR) have significantly improved multi-view reconstruction when coupled with volume rendering.
We investigate feature-level consistent losses, aiming to harness valuable feature priors from diverse pretext visual tasks.
Our results, analyzed on DTU and EPFL, reveal that feature priors from image matching and multi-view stereo datasets outperform other pretext tasks.
arXiv Detail & Related papers (2024-08-04T16:09:46Z)
- An evaluation of pre-trained models for feature extraction in image classification [0.0]
This work aims to compare the performance of different pre-trained neural networks for feature extraction in image classification tasks.
Our results demonstrate that the best general performance across the datasets was achieved by CLIP-ViT-B and ViT-H-14, while CLIP-ResNet50 performed similarly but with less variability.
arXiv Detail & Related papers (2023-10-03T13:28:14Z)
- Improving Image Recognition by Retrieving from Web-Scale Image-Text Data [68.63453336523318]
We introduce an attention-based memory module, which learns the importance of each retrieved example from the memory.
Compared to existing approaches, our method removes the influence of the irrelevant retrieved examples, and retains those that are beneficial to the input query.
We show that it achieves state-of-the-art accuracies in ImageNet-LT, Places-LT and Webvision datasets.
arXiv Detail & Related papers (2023-04-11T12:12:05Z)
- Skip-Attention: Improving Vision Transformers by Paying Less Attention [55.47058516775423]
Vision transformers (ViTs) use expensive self-attention operations in every layer.
We propose SkipAt, a method to reuse self-attention from preceding layers to approximate attention at one or more subsequent layers.
We show the effectiveness of our method in image classification and self-supervised learning on ImageNet-1K, semantic segmentation on ADE20K, image denoising on SIDD, and video denoising on DAVIS.
arXiv Detail & Related papers (2023-01-05T18:59:52Z)
- Enhanced Transfer Learning Through Medical Imaging and Patient Demographic Data Fusion [0.0]
We examine the performance enhancement in classification of medical imaging data when image features are combined with associated non-image data.
We utilise transfer learning with networks pretrained on ImageNet, used directly as feature extractors and fine-tuned on the target domain.
arXiv Detail & Related papers (2021-11-29T09:11:52Z)
- Exploiting the relationship between visual and textual features in social networks for image classification with zero-shot deep learning [0.0]
In this work, we propose a classifier ensemble based on the transferable learning capabilities of the CLIP neural network architecture.
Our experiments, based on image classification tasks according to the labels of the Places dataset, are performed by first considering only the visual part.
Considering the texts associated with the images can help improve accuracy, depending on the goal.
arXiv Detail & Related papers (2021-07-08T10:54:59Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting network architectures tailored to each domain task, along with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- Fusion of CNNs and statistical indicators to improve image classification [65.51757376525798]
Convolutional Networks have dominated the field of computer vision for the last ten years.
The main strategy to prolong this trend relies on further upscaling networks in size.
We hypothesise that adding heterogeneous sources of information may be more cost-effective to a CNN than building a bigger network.
arXiv Detail & Related papers (2020-12-20T23:24:31Z)
- Data Augmentation for Meta-Learning [58.47185740820304]
Meta-learning algorithms sample support data, query data, and tasks on each training step.
Data augmentation can be used not only to expand the number of images available per class, but also to generate entirely new classes/tasks.
Our proposed meta-specific data augmentation significantly improves the performance of meta-learners on few-shot classification benchmarks.
arXiv Detail & Related papers (2020-10-14T13:48:22Z)
- Towards Improved Human Action Recognition Using Convolutional Neural Networks and Multimodal Fusion of Depth and Inertial Sensor Data [1.52292571922932]
This paper attempts to improve the accuracy of Human Action Recognition (HAR) by fusing depth and inertial sensor data.
We transform the depth data into Sequential Front view Images (SFI) and fine-tune the pre-trained AlexNet on these images.
Inertial data is converted into Signal Images (SI) and another convolutional neural network (CNN) is trained on these images.
arXiv Detail & Related papers (2020-08-22T03:41:34Z)
- Learning Test-time Augmentation for Content-based Image Retrieval [42.188013259368766]
Off-the-shelf convolutional neural network features achieve outstanding results in many image retrieval tasks.
Existing image retrieval approaches require fine-tuning or modification of pre-trained networks to adapt to variations unique to the target data.
Our method enhances the invariance of off-the-shelf features by aggregating features extracted from images augmented at test-time, with augmentations guided by a policy learned through reinforcement learning.
arXiv Detail & Related papers (2020-02-05T05:08:41Z)
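The test-time aggregation idea in the last entry — pooling descriptors over augmented copies of a query image — can be sketched as follows. The feature extractor and the fixed augmentation set here are toy stand-ins; the paper learns its augmentation policy with reinforcement learning, which is not reproduced here.

```python
import numpy as np

def extract_features(img):
    # Toy stand-in for an off-the-shelf CNN descriptor: a few global
    # statistics of the image, L2-normalised.
    v = np.array([img.mean(), img.std(), img.max(), img.min()])
    return v / np.linalg.norm(v)

def augment(img):
    # A tiny fixed augmentation set; a learned policy would choose
    # transforms per target dataset.
    yield img
    yield np.fliplr(img)
    yield np.rot90(img)

def tta_descriptor(img):
    # Aggregate descriptors over the augmented views, then renormalise
    # so the result is comparable under cosine similarity.
    feats = np.stack([extract_features(a) for a in augment(img)])
    agg = feats.mean(axis=0)
    return agg / np.linalg.norm(agg)

img = np.arange(64, dtype=float).reshape(8, 8) / 63.0
d = tta_descriptor(img)
print(d.shape, round(float(np.linalg.norm(d)), 6))  # (4,) 1.0
```

The aggregation leaves the pretrained extractor untouched, which is the point of the approach: invariance is gained at query time rather than by fine-tuning the network.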
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.