Foundation Models for Zero-Shot Segmentation of Scientific Images without AI-Ready Data
- URL: http://arxiv.org/abs/2506.24039v1
- Date: Mon, 30 Jun 2025 16:45:23 GMT
- Title: Foundation Models for Zero-Shot Segmentation of Scientific Images without AI-Ready Data
- Authors: Shubhabrata Mukherjee, Jack Lang, Obeen Kwon, Iryna Zenyuk, Valerie Brogden, Adam Weber, Daniela Ushizima,
- Abstract summary: Zenesis is a comprehensive no-code interactive platform designed to minimize barriers posed by data readiness for scientific images. We develop lightweight multi-modal adaptation techniques that enable zero-shot operation on raw scientific data. Our results demonstrate that Zenesis is a powerful tool for scientific applications, particularly in fields where high-quality annotated datasets are unavailable.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot and prompt-based technologies were built on frequently occurring natural images to transform visual reasoning tasks, which explains why such technologies struggle with valuable yet scarce scientific image sets. In this work, we propose Zenesis, a comprehensive no-code interactive platform designed to minimize barriers posed by data readiness for scientific images. We develop lightweight multi-modal adaptation techniques that enable zero-shot operation on raw scientific data, along with human-in-the-loop refinement and heuristic-based temporal enhancement options. We demonstrate the performance of our approach through comprehensive comparison and validation on challenging Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) data of catalyst-loaded membranes. Zenesis significantly outperforms baseline methods, achieving an average accuracy of 0.947, an Intersection over Union (IoU) of 0.858, and a Dice score of 0.923 for amorphous catalyst samples, and an accuracy of 0.987, an IoU of 0.857, and a Dice score of 0.923 for crystalline samples. These results mark a substantial improvement over traditional methods like Otsu thresholding and even advanced models like the Segment Anything Model (SAM) when used in isolation. Our results demonstrate that Zenesis is a powerful tool for scientific applications, particularly in fields where high-quality annotated datasets are unavailable, accelerating accurate analysis of experimental imaging.
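For reference, the three reported metrics (pixel accuracy, IoU, Dice) can be computed for binary segmentation masks with a short NumPy sketch. This is an illustrative implementation, not the paper's code; the function and array names are hypothetical:

```python
import numpy as np

def segmentation_scores(pred, target):
    """Pixel accuracy, IoU, and Dice score for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()        # true-positive pixels
    union = np.logical_or(pred, target).sum()      # pixels positive in either mask
    total_pos = pred.sum() + target.sum()          # denominator for Dice
    accuracy = (pred == target).mean()
    iou = tp / union if union else 1.0             # empty masks count as a perfect match
    dice = 2 * tp / total_pos if total_pos else 1.0
    return accuracy, iou, dice

# Tiny worked example with hypothetical 2x3 masks
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
acc, iou, dice = segmentation_scores(pred, target)  # → (0.667, 0.5, 0.667)
```

For binary masks, Dice and IoU are linked by Dice = 2·IoU / (1 + IoU), which is consistent with the reported pairs (e.g., IoU 0.858 gives 1.716 / 1.858 ≈ 0.923, the reported Dice score).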
Related papers
- Synthesizing Reality: Leveraging the Generative AI-Powered Platform Midjourney for Construction Worker Detection [0.3011426942929757]
This study presents a novel image synthesis methodology tailored for construction worker detection. The approach entails generating a collection of 12,000 synthetic images by formulating 3,000 different prompts. Evaluation on a real construction image dataset yielded promising results.
arXiv Detail & Related papers (2025-07-17T15:35:27Z)
- HistoART: Histopathology Artifact Detection and Reporting Tool [37.31105955164019]
Whole Slide Imaging (WSI) is widely used to digitize tissue specimens for detailed, high-resolution examination. WSI remains vulnerable to artifacts introduced during slide preparation and scanning. We propose and compare three robust artifact detection approaches for WSIs.
arXiv Detail & Related papers (2025-06-23T17:22:19Z)
- Appeal prediction for AI up-scaled Images [45.61706071739717]
We describe our developed dataset, which uses 136 base images and five different up-scaling methods. We evaluate the appeal of the different methods; the results indicate that Real-ESRGAN and BSRGAN perform best. In addition, we evaluate state-of-the-art image appeal and quality models; none of them showed high prediction performance.
arXiv Detail & Related papers (2025-02-19T13:45:24Z)
- Efficient Brain Tumor Classification with Lightweight CNN Architecture: A Novel Approach [0.0]
Brain tumor classification using MRI images is critical in medical diagnostics, where early and accurate detection significantly impacts patient outcomes. Recent advancements in deep learning (DL) have shown promise, but many models struggle with balancing accuracy and computational efficiency. We propose a novel model architecture integrating separable convolutions and squeeze-and-excitation (SE) blocks, designed to enhance feature extraction while maintaining computational efficiency.
arXiv Detail & Related papers (2025-02-01T21:06:42Z)
- Merging synthetic and real embryo data for advanced AI predictions [69.07284335967019]
We train two generative models using two datasets, one we created and made publicly available and one existing public dataset, to generate synthetic embryo images at various cell stages. These were combined with real images to train classification models for embryo cell stage prediction. Our results demonstrate that incorporating synthetic images alongside real data improved classification performance, with the model achieving 97% accuracy compared to 94.5% when trained solely on real data.
arXiv Detail & Related papers (2024-12-02T08:24:49Z)
- Dataset Distillation for Histopathology Image Classification [46.04496989951066]
We introduce a novel dataset distillation algorithm tailored for histopathology image datasets (Histo-DD).
We conduct a comprehensive evaluation of the effectiveness of the proposed algorithm and the generated histopathology samples in both patch-level and slide-level classification tasks.
arXiv Detail & Related papers (2024-08-19T05:53:38Z)
- Distributional Drift Detection in Medical Imaging with Sketching and Fine-Tuned Transformer [2.7552551107566137]
This paper presents an accurate and sensitive approach to detect distributional drift in CT-scan medical images. We developed a robust baseline library model for real-time anomaly detection, allowing for efficient comparison of incoming images. We fine-tuned a pre-trained Vision Transformer model to extract relevant features, using mammography as a case study.
arXiv Detail & Related papers (2024-08-15T23:46:37Z)
- Accelerating Domain-Aware Electron Microscopy Analysis Using Deep Learning Models with Synthetic Data and Image-Wide Confidence Scoring [0.0]
We create a physics-based synthetic image and data generator, resulting in a machine learning model that achieves comparable precision (0.86), recall (0.63), F1 score (0.71), and engineering property predictions (R² = 0.82).
Our study demonstrates that synthetic data can eliminate reliance on human annotation in ML and provides a means for domain awareness in cases where many feature detections per image are needed.
arXiv Detail & Related papers (2024-08-02T20:15:15Z)
- Enhanced Sharp-GAN For Histopathology Image Synthesis [63.845552349914186]
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection.
We propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization.
The proposed approach outperforms Sharp-GAN in all four image quality metrics on two datasets.
arXiv Detail & Related papers (2023-01-24T17:54:01Z)
- Robust deep learning for eye fundus images: Bridging real and synthetic data for enhancing generalization [0.8599177028761124]
This work compares ten different GAN architectures to generate synthetic eye-fundus images with and without AMD.
StyleGAN2 reached the lowest Fréchet Inception Distance (166.17), and clinicians could not accurately differentiate between real and synthetic images.
The accuracy rates were 82.8% for the test set and 81.3% for the STARE dataset, demonstrating the model's generalizability.
arXiv Detail & Related papers (2022-03-25T18:42:20Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification, using the largest and richest dataset of its kind.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- SCREENet: A Multi-view Deep Convolutional Neural Network for Classification of High-resolution Synthetic Mammographic Screening Scans [3.8137985834223502]
We develop and evaluate a multi-view deep learning approach to the analysis of high-resolution synthetic mammograms.
We assess the effect on accuracy of image resolution and training set size.
arXiv Detail & Related papers (2020-09-18T00:12:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.