Interactive Visual Study of Multiple Attributes Learning Model of X-Ray
Scattering Images
- URL: http://arxiv.org/abs/2009.02256v1
- Date: Thu, 3 Sep 2020 00:38:45 GMT
- Title: Interactive Visual Study of Multiple Attributes Learning Model of X-Ray
Scattering Images
- Authors: Xinyi Huang, Suphanut Jamonnak, Ye Zhao, Boyu Wang, Minh Hoai, Kevin
Yager, Wei Xu
- Abstract summary: We present an interactive system for domain scientists to visually study the multiple attributes learning models applied to x-ray scattering images.
The exploration is guided by the manifestation of model performance related to mutual relationships among attributes.
The system thus supports domain scientists to improve the training dataset and model, find questionable attribute labels, and identify outlier images or spurious data clusters.
- Score: 34.95218692917125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing interactive visualization tools for deep learning are mostly applied
to the training, debugging, and refinement of neural network models working on
natural images. However, visual analytics tools are lacking for the specific
application of x-ray image classification with multiple structural attributes.
In this paper, we present an interactive system for domain scientists to
visually study the multiple attributes learning models applied to x-ray
scattering images. It allows domain scientists to interactively explore this
important type of scientific image in embedded spaces that are defined on the
model prediction output, the actual labels, and the discovered feature space of
neural networks. Users can flexibly select instance images and their clusters,
and compare them with respect to the specified visual representation of
attributes. The exploration is guided by the manifestation of model performance
related to mutual relationships among attributes, which often affect the
learning accuracy and effectiveness. The system thus supports domain scientists
to improve the training dataset and model, find questionable attribute labels,
and identify outlier images or spurious data clusters. Case studies and
scientists' feedback demonstrate its functionality and usefulness.
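As a rough illustration of the embedded-space exploration described above, the following Python sketch extracts per-image features from a trained multi-attribute classifier, projects them to 2D, and flags images whose predictions disagree with their labels for a chosen attribute. This is a minimal sketch, not the authors' implementation: the file names and attribute index are hypothetical, and the projection uses an off-the-shelf t-SNE rather than whatever embedding the paper's system computes.

```python
# Minimal sketch (not the paper's implementation): explore a multi-attribute
# model by embedding its penultimate-layer features in 2D and flagging
# label/prediction disagreements per attribute. All names are hypothetical.
import numpy as np
from sklearn.manifold import TSNE

# Assumed inputs: per-image feature vectors from the trained network,
# per-attribute predicted probabilities, and the ground-truth labels.
features = np.load("features.npy")        # shape (n_images, d)
pred_probs = np.load("pred_probs.npy")    # shape (n_images, n_attributes)
labels = np.load("labels.npy")            # shape (n_images, n_attributes), 0/1

# One embedded space defined on the discovered feature space; spaces on the
# prediction output or the actual labels would just swap the matrix below.
xy = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)

attribute = 3                              # index of the attribute under study
pred = (pred_probs[:, attribute] >= 0.5).astype(int)
disagree = np.where(pred != labels[:, attribute])[0]

# Images whose predictions disagree with their labels, or that sit far from
# their attribute cluster in xy, are candidates for questionable labels.
print(f"{len(disagree)} images disagree on attribute {attribute}")
print("2D coordinates of the first few:", xy[disagree[:5]])
```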
Related papers
- Improving Human-Object Interaction Detection via Virtual Image Learning [68.56682347374422]
Human-Object Interaction (HOI) detection aims to understand the interactions between humans and objects.
In this paper, we propose to alleviate the impact of such an unbalanced distribution via Virtual Image Learning (VIL).
A novel label-to-image approach, Multiple Steps Image Creation (MUSIC), is proposed to create a high-quality dataset whose distribution is consistent with that of real images.
arXiv Detail & Related papers (2023-08-04T10:28:48Z)
- Interactive Visual Feature Search [8.255656003475268]
We introduce Visual Feature Search, a novel interactive visualization that is adaptable to any CNN.
Our tool allows a user to highlight an image region and search for images from a given dataset with the most similar model features.
We demonstrate how our tool elucidates different aspects of model behavior by performing experiments on a range of applications, such as in medical imaging and wildlife classification.
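A minimal sketch of the region-to-dataset feature search idea described above, assuming precomputed CNN feature maps; the helper names and the cosine-similarity ranking are illustrative, not the tool's actual implementation.

```python
# Illustrative sketch of region-based feature search (not the tool's code).
import numpy as np

def region_query(feature_map, box):
    """Average-pool a CNN feature map over a highlighted region.
    feature_map: (H, W, C); box: (y0, y1, x0, x1) in feature-map coordinates."""
    y0, y1, x0, x1 = box
    return feature_map[y0:y1, x0:x1].mean(axis=(0, 1))

def most_similar(query_vec, dataset_vecs, k=5):
    """Rank dataset images by cosine similarity to the query region feature."""
    q = query_vec / np.linalg.norm(query_vec)
    d = dataset_vecs / np.linalg.norm(dataset_vecs, axis=1, keepdims=True)
    sims = d @ q
    order = np.argsort(-sims)[:k]
    return order, sims[order]

# Usage with hypothetical precomputed features:
# query = region_query(np.load("img_feature_map.npy"), (4, 9, 10, 15))
# idx, scores = most_similar(query, np.load("dataset_features.npy"))
```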
arXiv Detail & Related papers (2022-11-28T04:39:03Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
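The snippet below shows one plausible way to fold attribution maps into training, not CHALLENGER's actual procedure: it computes a simple gradient-times-input attribution and penalizes attribution mass that falls outside a hypothetical per-sample relevance mask.

```python
# Hedged sketch: one way to use attribution maps during training (not the
# paper's exact method). Attribution = gradient x input; the loss penalizes
# attribution outside a per-sample relevance mask (hypothetical supervision).
import torch
import torch.nn.functional as F

def attribution_loss(model, images, targets, masks, lam=0.1):
    images = images.clone().requires_grad_(True)
    logits = model(images)
    task_loss = F.cross_entropy(logits, targets)

    # Gradient of the correct-class score with respect to the input pixels.
    score = logits.gather(1, targets.unsqueeze(1)).sum()
    grads = torch.autograd.grad(score, images, create_graph=True)[0]
    attribution = (grads * images).abs().sum(dim=1)      # (B, H, W)

    # Penalize attribution that falls outside the relevance mask (B, H, W).
    reg = (attribution * (1 - masks)).mean()
    return task_loss + lam * reg
```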
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- Learning multi-scale functional representations of proteins from single-cell microscopy data [77.34726150561087]
We show that simple convolutional networks trained on localization classification can learn protein representations that encapsulate diverse functional information.
We also propose a robust evaluation strategy to assess quality of protein representations across different scales of biological function.
arXiv Detail & Related papers (2022-05-24T00:00:07Z)
- Attribute Prototype Network for Any-Shot Learning [113.50220968583353]
We argue that an image representation with integrated attribute localization ability would be beneficial for any-shot, i.e. zero-shot and few-shot, image classification tasks.
We propose a novel representation learning framework that jointly learns global and local features using only class-level attributes.
arXiv Detail & Related papers (2022-04-04T02:25:40Z)
- Self-supervised Contrastive Learning for Cross-domain Hyperspectral Image Representation [26.610588734000316]
This paper introduces a self-supervised learning framework suitable for hyperspectral images that are inherently challenging to annotate.
The proposed framework architecture leverages cross-domain CNN, allowing for learning representations from different hyperspectral images.
The experimental results demonstrate the advantage of the proposed self-supervised representation over models trained from scratch or other transfer learning methods.
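For context, a minimal contrastive (NT-Xent-style) objective of the kind such self-supervised frameworks typically optimize; this is a generic sketch, not the paper's cross-domain architecture, and the temperature and batch handling are assumptions.

```python
# Generic NT-Xent-style contrastive loss sketch (not the paper's exact model):
# two augmented views of the same patch form a positive pair, and all other
# patches in the batch act as negatives.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (B, d) projections of two views of the same patches."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2B, d)
    sim = z @ z.t() / temperature                       # (2B, 2B) similarities
    sim.fill_diagonal_(float("-inf"))                   # drop self-similarity
    b = z1.size(0)
    # The positive for row i is its counterpart view in the other half.
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets.to(sim.device))
```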
arXiv Detail & Related papers (2022-02-08T16:16:45Z)
- Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z)
- DoFE: Domain-oriented Feature Embedding for Generalizable Fundus Image Segmentation on Unseen Datasets [96.92018649136217]
We present a novel Domain-oriented Feature Embedding (DoFE) framework to improve the generalization ability of CNNs on unseen target domains.
Our DoFE framework dynamically enriches the image features with additional domain prior knowledge learned from multi-source domains.
Our framework generates satisfactory segmentation results on unseen datasets and surpasses other domain generalization and network regularization methods.
arXiv Detail & Related papers (2020-10-13T07:28:39Z)
- Graph Neural Networks for Unsupervised Domain Adaptation of Histopathological Image Analytics [22.04114134677181]
We present a novel method for unsupervised domain adaptation in histological image analysis.
It is based on a backbone for embedding images into a feature space, and a graph neural layer for propagating the supervision signals of images with labels.
In experiments, our method achieves state-of-the-art performance on four public datasets.
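A rough sketch of the backbone-plus-graph-layer idea described above, assuming a kNN graph over image embeddings and a single graph-convolution step that spreads supervision from labeled to unlabeled images; the layer sizes and loss are illustrative, not the paper's architecture.

```python
# Illustrative sketch (not the paper's architecture): a kNN graph over image
# embeddings plus one graph-convolution step, trained only on labeled nodes
# so supervision propagates to unlabeled images through the graph.
import torch
import torch.nn.functional as F

def knn_adjacency(emb, k=10):
    """Row-normalized kNN adjacency (with self-loops) from embeddings (N, d)."""
    dist = torch.cdist(emb, emb)
    idx = dist.topk(k + 1, largest=False).indices        # includes the node itself
    adj = torch.zeros(emb.size(0), emb.size(0), device=emb.device)
    adj.scatter_(1, idx, 1.0)
    return adj / adj.sum(dim=1, keepdim=True)

class GraphHead(torch.nn.Module):
    def __init__(self, d, n_classes):
        super().__init__()
        self.lin = torch.nn.Linear(d, n_classes)

    def forward(self, emb, adj):
        return self.lin(adj @ emb)                       # aggregate, then classify

# Training step (hypothetical tensors): loss on labeled nodes only.
# logits = head(embeddings, knn_adjacency(embeddings))
# loss = F.cross_entropy(logits[labeled_idx], labels[labeled_idx])
```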
arXiv Detail & Related papers (2020-08-21T04:53:44Z)
- Unsupervised Domain Attention Adaptation Network for Caricature Attribute Recognition [23.95731281719786]
Caricature attributes provide distinctive facial features to help research in Psychology and Neuroscience.
Unlike facial photo attribute datasets, which have large quantities of annotated images, annotations of caricature attributes are rare.
We propose a caricature attribute dataset, namely WebCariA, to facilitate research on attribute learning for caricatures.
arXiv Detail & Related papers (2020-07-18T06:38:45Z)
- FDive: Learning Relevance Models using Pattern-based Similarity Measures [27.136998442865217]
We present FDive, a visual active learning system that helps to create visually explorable relevance models.
Based on the best-ranked similarity measure, the system calculates an interactive Self-Organizing Map-based relevance model.
It also automatically prompts further relevance feedback to improve its accuracy.
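As a rough illustration of the Self-Organizing Map at the core of the relevance model described above, here is a minimal SOM training loop; the grid size, learning-rate schedule, and feature source are assumptions, not FDive's configuration.

```python
# Minimal Self-Organizing Map sketch (illustrative; not FDive's implementation).
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """data: (n_samples, d) feature vectors, e.g. from a chosen similarity measure."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)

    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr, sigma = lr0 * (1 - t), sigma0 * (1 - t) + 1e-3
            # Find the best-matching unit and apply a Gaussian neighborhood update.
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
            dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            influence = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * influence * (x - weights)
            step += 1
    return weights
```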
arXiv Detail & Related papers (2019-07-29T15:37:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.