TIAViz: A Browser-based Visualization Tool for Computational Pathology Models
- URL: http://arxiv.org/abs/2402.09990v1
- Date: Thu, 15 Feb 2024 14:54:46 GMT
- Title: TIAViz: A Browser-based Visualization Tool for Computational Pathology Models
- Authors: Mark Eastwood and John Pocock and Mostafa Jahanifar and Adam Shephard and Skiros Habib and Ethar Alzaid and Abdullah Alsalemi and Jan Lukas Robertus and Nasir Rajpoot and Shan Raza and Fayyaz Minhas
- Abstract summary: We introduce TIAViz, a Python-based visualization tool built into TIAToolbox.
It allows flexible, interactive, fully zoomable overlay of a wide variety of information onto whole slide images.
- Score: 0.6519788717471032
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digital pathology has gained significant traction in modern healthcare
systems. This shift from optical microscopes to digital imagery brings with it
the potential for improved diagnosis, efficiency, and the integration of AI
tools into the pathologist's workflow. A critical aspect of this is
visualization. Throughout the development of a machine learning (ML) model in
digital pathology, it is crucial to have flexible, openly available tools to
visualize models, from their outputs and predictions to the underlying
annotations and images used to train or test a model. We introduce TIAViz, a
Python-based visualization tool built into TIAToolbox which allows flexible,
interactive, fully zoomable overlay of a wide variety of information onto whole
slide images, including graphs, heatmaps, segmentations, annotations and other
WSIs. The UI is browser-based, allowing use either locally, on a remote
machine, or on a server to provide publicly available demos. This tool is open
source and is made available at:
https://github.com/TissueImageAnalytics/tiatoolbox and via pip installation
(pip install tiatoolbox) and conda as part of TIAToolbox.
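Since the viewer ships with TIAToolbox, a rough sketch of how one might prepare inputs for it is given below. The slide path, annotation geometry, properties, and the launch command are illustrative assumptions, not a prescribed workflow; consult the TIAToolbox documentation for the supported options.

```python
# Install first (from the abstract): `pip install tiatoolbox` (a conda package
# also exists). This sketch prepares a whole slide image and an annotation
# store of the kind TIAViz can overlay; file paths and properties are placeholders.
from shapely.geometry import Polygon

from tiatoolbox.annotation.storage import Annotation, SQLiteStore
from tiatoolbox.wsicore.wsireader import WSIReader

wsi = WSIReader.open("sample.svs")       # hypothetical slide path
print(wsi.info.slide_dimensions)         # baseline (width, height) in pixels

# Write a toy annotation store; TIAViz renders such stores as zoomable overlays.
store = SQLiteStore("sample.db")
store.append(
    Annotation(
        geometry=Polygon([(0, 0), (0, 512), (512, 512), (512, 0)]),
        properties={"type": "tumour", "prob": 0.9},
    )
)
store.commit()

# The browser UI is then started from the command line, e.g. something along
# the lines of `tiatoolbox visualize --slides <slide_dir> --overlays <overlay_dir>`
# (subcommand and flag names may differ between TIAToolbox versions; see
# `tiatoolbox --help`).
```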
Related papers
- VLMEvalKit: An Open-Source Toolkit for Evaluating Large Multi-Modality Models [89.63342806812413]
We present VLMEvalKit, an open-source, PyTorch-based toolkit for evaluating large multi-modality models.
VLMEvalKit implements over 70 different large multi-modality models, including both proprietary APIs and open-source models.
We host the OpenVLM Leaderboard to track the progress of multi-modality learning research.
arXiv Detail & Related papers (2024-07-16T13:06:15Z)
- InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks [92.03764152132315]
We design a large-scale vision-language foundation model (InternVL), which scales up the vision foundation model to 6 billion parameters.
This model can be broadly applied to and achieve state-of-the-art performance on 32 generic visual-linguistic benchmarks.
It has powerful visual capabilities and can be a good alternative to the ViT-22B.
arXiv Detail & Related papers (2023-12-21T18:59:31Z)
- HistoColAi: An Open-Source Web Platform for Collaborative Digital Histology Image Annotation with AI-Driven Predictive Integration [1.5291251918989404]
Digital pathology has become a standard in the pathology workflow due to its many benefits.
Recent advances in deep learning-based methods for image analysis make them a potentially valuable aid in digital pathology.
This paper proposes a web service that provides an efficient tool for visualizing and annotating digitized histological images.
arXiv Detail & Related papers (2023-07-11T10:41:09Z)
- PiML Toolbox for Interpretable Machine Learning Model Development and Diagnostics [10.635578367440162]
PiML is an integrated and open-access Python toolbox for interpretable machine learning model development and model diagnostics.
It supports machine learning workflows in both low-code and high-code modes, covering the data pipeline, model training and tuning, and model interpretation and explanation.
arXiv Detail & Related papers (2023-05-07T08:19:07Z)
- DINOv2: Learning Robust Visual Features without Supervision [75.42921276202522]
This work shows that existing pretraining methods, especially self-supervised methods, can produce robust, all-purpose visual features if trained on enough curated data from diverse sources.
Most of the technical contributions aim at accelerating and stabilizing the training at scale.
In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature.
arXiv Detail & Related papers (2023-04-14T15:12:19Z)
- Slideflow: Deep Learning for Digital Histopathology with Real-Time Whole-Slide Visualization [49.62449457005743]
We develop a flexible deep learning library for histopathology called Slideflow.
It supports a broad array of deep learning methods for digital pathology.
It includes a fast whole-slide interface for deploying trained models.
arXiv Detail & Related papers (2023-04-09T02:49:36Z)
- Interactive Visual Feature Search [8.255656003475268]
We introduce Visual Feature Search, a novel interactive visualization that is adaptable to any CNN.
Our tool allows a user to highlight an image region and search for images from a given dataset with the most similar model features.
We demonstrate how our tool elucidates different aspects of model behavior through experiments on a range of applications, such as medical imaging and wildlife classification.
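The underlying idea generalizes readily; a minimal re-implementation sketch using a torchvision backbone (not the authors' code; the model choice, global pooling, and cosine-similarity metric are assumptions) might look like this:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

# Feature extractor: a pretrained ResNet-50 with its classification head removed.
weights = ResNet50_Weights.DEFAULT
backbone = torch.nn.Sequential(*list(resnet50(weights=weights).children())[:-1]).eval()
preprocess = weights.transforms()

@torch.no_grad()
def embed(image):
    """Return an L2-normalised global feature vector for a PIL image."""
    x = preprocess(image).unsqueeze(0)   # (1, 3, H, W)
    feat = backbone(x).flatten(1)        # (1, 2048) after global average pooling
    return F.normalize(feat, dim=1)

def search(crop, dataset_images, top_k=5):
    """Rank dataset images by cosine similarity of their model features to a
    user-highlighted crop (all inputs are PIL images)."""
    query = embed(crop)                                      # (1, 2048)
    feats = torch.cat([embed(im) for im in dataset_images])  # (N, 2048)
    scores = feats @ query.squeeze(0)                        # cosine similarities, (N,)
    return scores.topk(min(top_k, len(dataset_images)))
```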
arXiv Detail & Related papers (2022-11-28T04:39:03Z)
- Interactive Segmentation and Visualization for Tiny Objects in Multi-megapixel Images [5.09193568605539]
We introduce an interactive image segmentation and visualization framework for identifying, inspecting, and editing tiny objects in large multi-megapixel high dynamic range (HDR) images.
We developed an interactive toolkit that unifies model inference, HDR image visualization, and segmentation mask inspection and editing in a single graphical user interface.
Our interface features mouse-controlled, synchronized, dual-window visualization of the image and the segmentation mask, a critical feature for locating tiny objects in multi-megapixel images.
arXiv Detail & Related papers (2022-04-21T18:26:48Z)
- Interactive Visualization of Protein RINs using NetworKit in the Cloud [57.780880387925954]
In this paper, we consider an example from protein dynamics, specifically residue interaction networks (RINs).
We use NetworKit to build a cloud-based environment that enables domain scientists to run their visualization and analysis on large compute servers.
To demonstrate the versatility of this approach, we use it to build a custom Jupyter-based widget for RIN visualization.
arXiv Detail & Related papers (2022-03-02T17:41:45Z)
- Design of a Graphical User Interface for Few-Shot Machine Learning Classification of Electron Microscopy Data [0.23453441553817042]
We develop a Python-based graphical user interface that enables end users to easily conduct and visualize the output of few-shot learning models.
This interface is lightweight and can be hosted locally or on the web, providing the opportunity to reproducibly conduct, share, and crowd-source few-shot analyses.
arXiv Detail & Related papers (2021-07-21T23:02:33Z)
- TorchIO: A Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning [68.8204255655161]
We present TorchIO, an open-source Python library to enable efficient loading, preprocessing, augmentation and patch-based sampling of medical images for deep learning.
TorchIO follows the style of PyTorch and integrates standard medical image processing libraries to efficiently process images during training of neural networks.
It includes a command-line interface which allows users to apply transforms to image files without using Python.
arXiv Detail & Related papers (2020-03-09T13:36:16Z)
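For context on the TorchIO entry above, a minimal usage sketch is shown below (file names are placeholders and the transform and patch parameters are illustrative choices, not taken from the paper):

```python
import torch
import torchio as tio

# A subject pairs an intensity image with its segmentation (placeholder paths).
subject = tio.Subject(
    t1=tio.ScalarImage("t1.nii.gz"),
    seg=tio.LabelMap("seg.nii.gz"),
)

# Preprocessing and augmentation in the PyTorch transform style.
transform = tio.Compose([
    tio.RescaleIntensity(out_min_max=(0, 1)),
    tio.RandomAffine(degrees=10),
])
dataset = tio.SubjectsDataset([subject], transform=transform)

# Patch-based sampling so large volumes fit in memory during training.
sampler = tio.UniformSampler(patch_size=64)
queue = tio.Queue(dataset, max_length=16, samples_per_volume=4, sampler=sampler)
loader = torch.utils.data.DataLoader(queue, batch_size=2, num_workers=0)

for batch in loader:
    patches = batch["t1"][tio.DATA]  # tensor of shape (B, 1, 64, 64, 64)
    break
```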
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.