Towards Highly Expressive Machine Learning Models of Non-Melanoma Skin Cancer
- URL: http://arxiv.org/abs/2207.05749v1
- Date: Sat, 9 Jul 2022 04:53:25 GMT
- Title: Towards Highly Expressive Machine Learning Models of Non-Melanoma Skin Cancer
- Authors: Simon M. Thomas, James G. Lefevre, Glenn Baxter, Nicholas A. Hamilton
- Abstract summary: We present experiments in applying discrete modelling techniques to the problem domain of non-melanoma skin cancer.
We trained a sequence-to-sequence transformer to generate natural language descriptions using pathologist terminology.
The result is a promising means of working towards highly expressive machine learning systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Pathologists have a rich vocabulary with which they can describe all the
nuances of cellular morphology. In their world, there is a natural pairing of
images and words. Recent advances demonstrate that machine learning models can
now be trained to learn high-quality image features and represent them as
discrete units of information. This enables natural language, which is also
discrete, to be jointly modelled alongside the imaging, resulting in a
description of the contents of the imaging. Here we present experiments in
applying discrete modelling techniques to the problem domain of non-melanoma
skin cancer, specifically, histological images of Intraepidermal Carcinoma
(IEC). Implementing a VQ-GAN model to reconstruct high-resolution (256x256)
images of IEC, we trained a sequence-to-sequence transformer to generate
natural language descriptions using pathologist terminology. Combined with the
idea of interactive concept vectors made available by continuous generative
methods, we demonstrate an additional angle of interpretability. The result is
a promising means of working towards highly expressive machine learning systems
which are not only useful as predictive/classification tools, but also a means to
further our scientific understanding of disease.
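To make the two-stage pipeline concrete, here is a minimal PyTorch sketch: a toy vector-quantised encoder discretizes an image into codebook indices, and a sequence-to-sequence transformer maps those indices to description tokens. All layer sizes, codebook and vocabulary sizes below are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class VQEncoder(nn.Module):
    """Toy stand-in for a VQ-GAN encoder: conv features are snapped to the
    nearest codebook entry, yielding a grid of discrete token indices."""
    def __init__(self, codebook_size=512, dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=4, stride=4), nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=4, stride=4),
        )  # 256x256 image -> 16x16 grid of latents
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, img):
        z = self.conv(img).flatten(2).transpose(1, 2)            # (B, 256, dim)
        codes = self.codebook.weight.unsqueeze(0).expand(z.size(0), -1, -1)
        return torch.cdist(z, codes).argmin(-1)                  # (B, 256) token ids

class ImageToText(nn.Module):
    """Sequence-to-sequence transformer: image tokens -> description tokens."""
    def __init__(self, codebook_size=512, vocab_size=1000, dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(codebook_size, dim)
        self.tgt_emb = nn.Embedding(vocab_size, dim)
        self.seq2seq = nn.Transformer(d_model=dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, img_tokens, txt_tokens):
        causal = self.seq2seq.generate_square_subsequent_mask(txt_tokens.size(1))
        h = self.seq2seq(self.src_emb(img_tokens), self.tgt_emb(txt_tokens),
                         tgt_mask=causal)
        return self.head(h)                                      # next-token logits

img = torch.rand(1, 3, 256, 256)                                 # stand-in IEC patch
tokens = VQEncoder()(img)
logits = ImageToText()(tokens, torch.zeros(1, 8, dtype=torch.long))
print(tokens.shape, logits.shape)  # (1, 256) and (1, 8, 1000)
```

In the paper itself the discrete tokens come from a fully trained VQ-GAN and the decoder vocabulary is pathologist terminology; this sketch only shows how the pieces connect.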
Related papers
- Autoregressive Sequence Modeling for 3D Medical Image Representation [48.706230961589924]
We introduce a pioneering method for learning 3D medical image representations through an autoregressive sequence pre-training framework.
Our approach sequences various 3D medical images based on spatial, contrast, and semantic correlations, treating them as interconnected visual tokens within a token sequence.
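The pre-training signal in such a framework is ordinary next-token prediction over the visual token sequence. A minimal sketch, assuming the volumes are already tokenised into integer ids (sizes are made up):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, dim, seq_len = 4096, 128, 64
emb = nn.Embedding(vocab, dim)
pos = nn.Parameter(torch.zeros(seq_len, dim))       # learned positions
layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(dim, vocab)

tokens = torch.randint(0, vocab, (2, seq_len))      # (B, T) visual tokens
# causal mask: each position only attends to its predecessors
causal = torch.triu(torch.full((seq_len - 1, seq_len - 1), float("-inf")), diagonal=1)
h = backbone(emb(tokens[:, :-1]) + pos[: seq_len - 1], mask=causal)
loss = F.cross_entropy(head(h).reshape(-1, vocab), tokens[:, 1:].reshape(-1))
loss.backward()  # pre-training: predict each token from the ones before it
```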
arXiv Detail & Related papers (2024-09-13T10:19:10Z)
- Hierarchical Text-to-Vision Self Supervised Alignment for Improved Histopathology Representation Learning [64.1316997189396]
We present a novel language-tied self-supervised learning framework, Hierarchical Language-tied Self-Supervision (HLSS) for histopathology images.
Our resulting model achieves state-of-the-art performance on two medical imaging benchmarks, OpenSRH and TCGA datasets.
arXiv Detail & Related papers (2024-03-21T17:58:56Z)
- In-context learning enables multimodal large language models to classify cancer pathology images [0.7085801706650957]
In language processing, in-context learning provides an alternative, where models learn from within prompts, bypassing the need for parameter updates.
Here, we systematically evaluate the model Generative Pretrained Transformer 4 with Vision capabilities (GPT-4V) on cancer image processing with in-context learning.
Our results show that in-context learning is sufficient to match or even outperform specialized neural networks trained for particular tasks, while only requiring a minimal number of samples.
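In practice, in-context learning here means packing labelled example images into the prompt itself. The snippet below assembles a hypothetical few-shot prompt using the OpenAI chat-message schema; the URLs and labels are placeholder assumptions, and no API call is made:

```python
# Hypothetical few-shot prompt: labelled example images teach the model the
# task inside the prompt, with no parameter updates.
few_shot = [
    ("https://example.org/patch_tumor.png", "tumor"),
    ("https://example.org/patch_normal.png", "normal"),
]
messages = [{"role": "system",
             "content": "Classify each histology patch as 'tumor' or 'normal'."}]
for url, label in few_shot:
    messages.append({"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": url}}]})
    messages.append({"role": "assistant", "content": label})
# The query image goes last; the model answers by analogy with the examples.
messages.append({"role": "user", "content": [
    {"type": "image_url", "image_url": {"url": "https://example.org/query.png"}}]})
```

The labelled examples play the role that a training set would for a fine-tuned classifier.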
arXiv Detail & Related papers (2024-03-12T08:34:34Z)
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
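The global image-text contrastive term in such frameworks is typically a symmetric InfoNCE loss. A minimal sketch with made-up embeddings follows; MLIP's divergence encoder and knowledge-guided terms are not reproduced here:

```python
import torch
import torch.nn.functional as F

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE: matched image-text pairs score higher than mismatched."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0))        # diagonal = true pairs
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

loss = clip_style_loss(torch.randn(8, 256), torch.randn(8, 256))
```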
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- BiomedJourney: Counterfactual Biomedical Image Generation by Instruction-Learning from Multimodal Patient Journeys [99.7082441544384]
We present BiomedJourney, a novel method for counterfactual biomedical image generation by instruction-learning.
We use GPT-4 to process the corresponding imaging reports and generate a natural language description of disease progression.
The resulting triples are then used to train a latent diffusion model for counterfactual biomedical image generation.
arXiv Detail & Related papers (2023-10-16T18:59:31Z)
- Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain a multiple instance learning model.
We study two datasets of acute myeloid leukemia with over 100 000 single cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
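As a sketch of what producing such an attribution map looks like, here is integrated gradients via the Captum library (one common attribution method, not necessarily one of the four the paper evaluates); the classifier and input are placeholder assumptions:

```python
import torch
from captum.attr import IntegratedGradients

# Hypothetical single-cell classifier; any nn.Module with class logits works.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 2),
)
model.eval()

cell = torch.rand(1, 3, 64, 64)              # stand-in single-cell image
ig = IntegratedGradients(model)
attr = ig.attribute(cell, target=1)          # pixel-level relevance for class 1
print(attr.shape)  # same shape as the input: one score per pixel and channel
```

The resulting map can then be overlaid on the image and compared against expert annotations, as the paper does.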
arXiv Detail & Related papers (2023-03-15T14:00:11Z)
- DEPAS: De-novo Pathology Semantic Masks using a Generative Model [0.0]
We introduce a scalable generative model, coined DEPAS, that captures tissue structure and generates high-resolution semantic masks with state-of-the-art quality.
We demonstrate the ability of DEPAS to generate realistic semantic maps of tissue for three types of organs: skin, prostate, and lung.
arXiv Detail & Related papers (2023-02-13T16:48:33Z)
- RoentGen: Vision-Language Foundation Model for Chest X-ray Generation [7.618389245539657]
We develop a strategy to overcome the large natural-medical distributional shift by adapting a pre-trained latent diffusion model on a corpus of publicly available chest x-rays.
We investigate the model's ability to generate high-fidelity, diverse synthetic CXR images conditioned on text prompts.
We present evidence that the resulting model (RoentGen) is able to create visually convincing, diverse synthetic CXR images.
arXiv Detail & Related papers (2022-11-23T06:58:09Z)
- Deep Learning Generates Synthetic Cancer Histology for Explainability and Education [37.13457398561086]
Conditional generative adversarial networks (cGANs) are AI models that generate synthetic images.
We describe the use of a cGAN for explaining models trained to classify molecularly-subtyped tumors.
We show that clear, intuitive cGAN visualizations can reinforce and improve human understanding of histologic manifestations of tumor biology.
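The core of a cGAN is that generation is steered by a class label. A minimal sketch of a conditional generator, with all sizes and the subtype count as illustrative assumptions:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """cGAN generator sketch: the class label is embedded and concatenated to
    the noise vector, so samples are drawn for a chosen tumor subtype."""
    def __init__(self, n_classes=4, z_dim=100, img_pixels=64 * 64 * 3):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, 32)
        self.net = nn.Sequential(
            nn.Linear(z_dim + 32, 256), nn.ReLU(),
            nn.Linear(256, img_pixels), nn.Tanh(),
        )

    def forward(self, z, labels):
        x = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(x).view(-1, 3, 64, 64)

g = ConditionalGenerator()
fake = g(torch.randn(2, 100), torch.tensor([0, 3]))  # two subtypes on demand
print(fake.shape)  # (2, 3, 64, 64)
```

Sweeping the label while holding the noise fixed is what produces the subtype-contrasting visualizations the paper uses for explanation.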
arXiv Detail & Related papers (2022-11-12T00:14:57Z)
- Deepfake histological images for enhancing digital pathology [0.40631409309544836]
We develop a generative adversarial network model that synthesizes pathology images constrained by class labels.
We investigate the ability of this framework in synthesizing realistic prostate and colon tissue images.
We extend the approach to significantly more complex images from colon biopsies and show that the complex microenvironment in such tissues can also be reproduced.
arXiv Detail & Related papers (2022-06-16T17:11:08Z)
- Semantic segmentation of multispectral photoacoustic images using deep learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
arXiv Detail & Related papers (2021-05-20T09:33:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.